Test Report: Docker_Linux_crio_arm64 22128

2cb2c94398211ca18cf7c1877ff6bae2d6b3d16e:2025-12-13:42756

Failed tests (40/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.77
44 TestAddons/parallel/Registry 14.61
45 TestAddons/parallel/RegistryCreds 0.51
46 TestAddons/parallel/Ingress 144.57
47 TestAddons/parallel/InspektorGadget 5.29
48 TestAddons/parallel/MetricsServer 5.39
50 TestAddons/parallel/CSI 53.99
51 TestAddons/parallel/Headlamp 3.28
52 TestAddons/parallel/CloudSpanner 6.28
53 TestAddons/parallel/LocalPath 8.62
54 TestAddons/parallel/NvidiaDevicePlugin 6.28
55 TestAddons/parallel/Yakd 5.26
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.46
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.48
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.52
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.82
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.47
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.29
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.28
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.76
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.05
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.34
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.69
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.45
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.15
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 101.02
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.27
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.27
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.53
293 TestJSONOutput/pause/Command 1.88
299 TestJSONOutput/unpause/Command 1.46
358 TestKubernetesUpgrade 788.86
384 TestPause/serial/Pause 6.96
476 TestNetworkPlugins/group/flannel/Start 7200.078
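For local triage, any one of the failures above can be re-run in isolation with standard Go test tooling. A minimal sketch, assuming a minikube source checkout with the out/minikube-linux-arm64 binary already built (the path every command in the logs below invokes):

	# Hedged sketch: re-run a single failed test from the table above.
	# Assumes the minikube repo root as the working directory and a prebuilt
	# out/minikube-linux-arm64; -run matches subtests by slash-separated regex.
	go test ./test/integration -v -timeout 30m -run 'TestAddons/parallel/Registry'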
TestAddons/serial/Volcano (0.77s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable volcano --alsologtostderr -v=1: exit status 11 (774.049229ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:13:36.619403  914339 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:13:36.620858  914339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:13:36.620902  914339 out.go:374] Setting ErrFile to fd 2...
	I1213 10:13:36.620924  914339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:13:36.621270  914339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:13:36.621633  914339 mustload.go:66] Loading cluster: addons-054604
	I1213 10:13:36.622067  914339 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:13:36.622104  914339 addons.go:622] checking whether the cluster is paused
	I1213 10:13:36.622238  914339 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:13:36.622269  914339 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:13:36.622822  914339 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:13:36.640557  914339 ssh_runner.go:195] Run: systemctl --version
	I1213 10:13:36.640609  914339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:13:36.659918  914339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:13:36.772154  914339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:13:36.772247  914339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:13:36.812410  914339 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:13:36.812435  914339 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:13:36.812440  914339 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:13:36.812450  914339 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:13:36.812453  914339 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:13:36.812457  914339 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:13:36.812460  914339 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:13:36.812464  914339 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:13:36.812470  914339 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:13:36.812501  914339 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:13:36.812511  914339 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:13:36.812515  914339 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:13:36.812518  914339 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:13:36.812522  914339 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:13:36.812525  914339 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:13:36.812530  914339 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:13:36.812537  914339 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:13:36.812541  914339 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:13:36.812544  914339 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:13:36.812547  914339 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:13:36.812553  914339 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:13:36.812556  914339 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:13:36.812573  914339 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:13:36.812581  914339 cri.go:89] found id: ""
	I1213 10:13:36.812668  914339 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:13:36.841058  914339 out.go:203] 
	W1213 10:13:36.843992  914339 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:13:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:13:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:13:36.844028  914339 out.go:285] * 
	* 
	W1213 10:13:37.325343  914339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:13:37.328470  914339 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.77s)
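The same MK_ADDON_DISABLE_PAUSED signature recurs in every addons-disable failure below: minikube's paused-state check shells out to `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this CRI-O node, even though crictl lists the kube-system containers fine. A hedged diagnostic along these lines (profile name and commands taken from the logs above) would confirm which side is wrong:

	# Hedged diagnostic: compare the runc state dir the error names
	# with what CRI-O itself reports inside the node.
	out/minikube-linux-arm64 -p addons-054604 ssh "sudo ls -ld /run/runc"   # the path the error reports missing
	out/minikube-linux-arm64 -p addons-054604 ssh "sudo runc list -f json"  # the exact call that exits 1
	out/minikube-linux-arm64 -p addons-054604 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"  # succeeds per the log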

TestAddons/parallel/Registry (14.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.735724ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003495274s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003416746s
addons_test.go:394: (dbg) Run:  kubectl --context addons-054604 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-054604 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-054604 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.058716659s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 ip
2025/12/13 10:14:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable registry --alsologtostderr -v=1: exit status 11 (295.163241ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:14:01.019886  915274 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:01.020621  915274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:01.020634  915274 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:01.020640  915274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:01.020916  915274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:01.021204  915274 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:01.021642  915274 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:01.021661  915274 addons.go:622] checking whether the cluster is paused
	I1213 10:14:01.021772  915274 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:01.021788  915274 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:01.022356  915274 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:01.040757  915274 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:01.040819  915274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:01.059986  915274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:01.172569  915274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:01.172643  915274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:01.213327  915274 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:01.213349  915274 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:01.213356  915274 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:01.213361  915274 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:01.213365  915274 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:01.213369  915274 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:01.213373  915274 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:01.213376  915274 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:01.213380  915274 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:01.213388  915274 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:01.213391  915274 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:01.213394  915274 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:01.213398  915274 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:01.213402  915274 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:01.213405  915274 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:01.213410  915274 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:01.213413  915274 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:01.213417  915274 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:01.213420  915274 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:01.213428  915274 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:01.213433  915274 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:01.213437  915274 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:01.213440  915274 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:01.213443  915274 cri.go:89] found id: ""
	I1213 10:14:01.213498  915274 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:01.231407  915274 out.go:203] 
	W1213 10:14:01.234223  915274 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:01.234250  915274 out.go:285] * 
	* 
	W1213 10:14:01.241720  915274 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:01.244679  915274 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.61s)

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.484431ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-054604
addons_test.go:334: (dbg) Run:  kubectl --context addons-054604 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (276.208246ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:14:44.194741  916383 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:44.195646  916383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:44.195661  916383 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:44.195667  916383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:44.195918  916383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:44.196214  916383 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:44.196652  916383 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:44.196671  916383 addons.go:622] checking whether the cluster is paused
	I1213 10:14:44.196781  916383 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:44.196798  916383 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:44.197312  916383 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:44.217333  916383 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:44.217394  916383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:44.236164  916383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:44.344356  916383 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:44.344455  916383 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:44.378198  916383 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:44.378221  916383 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:44.378226  916383 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:44.378235  916383 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:44.378239  916383 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:44.378243  916383 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:44.378246  916383 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:44.378249  916383 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:44.378252  916383 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:44.378259  916383 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:44.378262  916383 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:44.378265  916383 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:44.378269  916383 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:44.378273  916383 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:44.378276  916383 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:44.378281  916383 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:44.378288  916383 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:44.378292  916383 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:44.378295  916383 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:44.378298  916383 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:44.378303  916383 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:44.378306  916383 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:44.378308  916383 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:44.378311  916383 cri.go:89] found id: ""
	I1213 10:14:44.378370  916383 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:44.393783  916383 out.go:203] 
	W1213 10:14:44.396902  916383 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:44.396942  916383 out.go:285] * 
	* 
	W1213 10:14:44.404478  916383 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:44.407526  916383 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)

TestAddons/parallel/Ingress (144.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-054604 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-054604 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-054604 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [606bbc87-0744-47d1-be8c-9e6a80278475] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [606bbc87-0744-47d1-be8c-9e6a80278475] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.01802667s
I1213 10:14:22.595831  907484 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.992197228s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-054604 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
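Exit status 28 is curl's operation-timed-out code, so the request reached the node but never got an answer within the test's window. A hedged manual re-check, reusing the profile and Host header from the failing command above:

	# Hedged re-check: bounded curl against the ingress, then the controller pods.
	out/minikube-linux-arm64 -p addons-054604 ssh "curl -sS --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-054604 -n ingress-nginx get pods -o wide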
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-054604
helpers_test.go:244: (dbg) docker inspect addons-054604:

-- stdout --
	[
	    {
	        "Id": "218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157",
	        "Created": "2025-12-13T10:11:22.946692325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:11:23.023865917Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/hostname",
	        "HostsPath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/hosts",
	        "LogPath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157-json.log",
	        "Name": "/addons-054604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-054604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-054604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157",
	                "LowerDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-054604",
	                "Source": "/var/lib/docker/volumes/addons-054604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-054604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-054604",
	                "name.minikube.sigs.k8s.io": "addons-054604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f3db1d6b6f368ceb4716970ec9fd435d05a011f4975272efc2129da757494a4c",
	            "SandboxKey": "/var/run/docker/netns/f3db1d6b6f36",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-054604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:50:02:0f:dd:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a8e1f2933777fab31ca9aef9d89fb8e54c41a232069e86fd3fdde7c2068c9f7",
	                    "EndpointID": "a7a467f80d7425e76e6b8f66ccc4ad895f571ca53eead719cd6789557e127d57",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-054604",
	                        "218e0b6bff85"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-054604 -n addons-054604
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-054604 logs -n 25: (1.440870014s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-457146                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-457146 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ --download-only -p binary-mirror-558778 --alsologtostderr --binary-mirror http://127.0.0.1:34767 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-558778   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ delete  │ -p binary-mirror-558778                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-558778   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ addons  │ enable dashboard -p addons-054604                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-054604                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ start   │ -p addons-054604 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:13 UTC │
	│ addons  │ addons-054604 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-054604 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:13 UTC │                     │
	│ addons  │ enable headlamp -p addons-054604 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-054604 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:13 UTC │                     │
	│ ip      │ addons-054604 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ addons  │ addons-054604 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-054604 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-054604 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ ssh     │ addons-054604 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-054604 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-054604 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-054604                                                                                                                                                                                                                                                                                                                                                                                           │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │ 13 Dec 25 10:14 UTC │
	│ addons  │ addons-054604 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-054604 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ addons  │ addons-054604 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:14 UTC │                     │
	│ ssh     │ addons-054604 ssh cat /opt/local-path-provisioner/pvc-39853fcc-b135-458e-957a-4cf093e2ffac_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:15 UTC │ 13 Dec 25 10:15 UTC │
	│ addons  │ addons-054604 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:15 UTC │                     │
	│ addons  │ addons-054604 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:15 UTC │                     │
	│ ip      │ addons-054604 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:11:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:11:15.986942  908469 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:11:15.987076  908469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:15.987086  908469 out.go:374] Setting ErrFile to fd 2...
	I1213 10:11:15.987092  908469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:15.987335  908469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:11:15.987765  908469 out.go:368] Setting JSON to false
	I1213 10:11:15.988600  908469 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17625,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:11:15.988669  908469 start.go:143] virtualization:  
	I1213 10:11:15.992589  908469 out.go:179] * [addons-054604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:11:15.995844  908469 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:11:15.995932  908469 notify.go:221] Checking for updates...
	I1213 10:11:16.003168  908469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:11:16.007097  908469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:11:16.010417  908469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:11:16.013503  908469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:11:16.016685  908469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:11:16.019977  908469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:11:16.051149  908469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:11:16.051295  908469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:16.109237  908469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:16.099536235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:16.109351  908469 docker.go:319] overlay module found
	I1213 10:11:16.112400  908469 out.go:179] * Using the docker driver based on user configuration
	I1213 10:11:16.115149  908469 start.go:309] selected driver: docker
	I1213 10:11:16.115168  908469 start.go:927] validating driver "docker" against <nil>
	I1213 10:11:16.115182  908469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:11:16.115911  908469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:16.167793  908469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:16.158670607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:16.167948  908469 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:11:16.168176  908469 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:11:16.171230  908469 out.go:179] * Using Docker driver with root privileges
	I1213 10:11:16.174202  908469 cni.go:84] Creating CNI manager for ""
	I1213 10:11:16.174268  908469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:11:16.174279  908469 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:11:16.174357  908469 start.go:353] cluster config:
	{Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:11:16.177412  908469 out.go:179] * Starting "addons-054604" primary control-plane node in "addons-054604" cluster
	I1213 10:11:16.180228  908469 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:11:16.183111  908469 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:11:16.185987  908469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:11:16.186032  908469 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:11:16.186038  908469 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 10:11:16.186061  908469 cache.go:65] Caching tarball of preloaded images
	I1213 10:11:16.186152  908469 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:11:16.186162  908469 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 10:11:16.186519  908469 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/config.json ...
	I1213 10:11:16.186550  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/config.json: {Name:mka40e27ef638482f1994511ba19eb0581f749b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:16.205346  908469 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:11:16.205368  908469 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:11:16.205388  908469 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:11:16.205429  908469 start.go:360] acquireMachinesLock for addons-054604: {Name:mkf7dc8f8e3dcd32bb06bccf10d7da8a028997c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:11:16.205565  908469 start.go:364] duration metric: took 113.782µs to acquireMachinesLock for "addons-054604"
	I1213 10:11:16.205623  908469 start.go:93] Provisioning new machine with config: &{Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:11:16.205709  908469 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:11:16.210941  908469 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 10:11:16.211214  908469 start.go:159] libmachine.API.Create for "addons-054604" (driver="docker")
	I1213 10:11:16.211254  908469 client.go:173] LocalClient.Create starting
	I1213 10:11:16.211366  908469 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem
	I1213 10:11:16.391638  908469 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem
	I1213 10:11:16.586284  908469 cli_runner.go:164] Run: docker network inspect addons-054604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:11:16.602945  908469 cli_runner.go:211] docker network inspect addons-054604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:11:16.603038  908469 network_create.go:284] running [docker network inspect addons-054604] to gather additional debugging logs...
	I1213 10:11:16.603061  908469 cli_runner.go:164] Run: docker network inspect addons-054604
	W1213 10:11:16.619370  908469 cli_runner.go:211] docker network inspect addons-054604 returned with exit code 1
	I1213 10:11:16.619434  908469 network_create.go:287] error running [docker network inspect addons-054604]: docker network inspect addons-054604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-054604 not found
	I1213 10:11:16.619452  908469 network_create.go:289] output of [docker network inspect addons-054604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-054604 not found
	
	** /stderr **
	I1213 10:11:16.619558  908469 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:11:16.636487  908469 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a546e0}
	I1213 10:11:16.636530  908469 network_create.go:124] attempt to create docker network addons-054604 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 10:11:16.636595  908469 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-054604 addons-054604
	I1213 10:11:16.695687  908469 network_create.go:108] docker network addons-054604 192.168.49.0/24 created
	I1213 10:11:16.695738  908469 kic.go:121] calculated static IP "192.168.49.2" for the "addons-054604" container
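
The network-create step above reduces to a single docker CLI invocation. A minimal Go sketch of that call, assuming a hypothetical helper name createMinikubeNetwork; the subnet, gateway, MTU and labels are copied from the logged command:

package main

import (
	"fmt"
	"os/exec"
)

// createMinikubeNetwork mirrors the "docker network create" call logged
// above: a bridge network with a fixed /24 subnet so the node container
// can later be pinned to the static IP .2.
func createMinikubeNetwork(name, subnet, gateway string) error {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createMinikubeNetwork("addons-054604", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}
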
	I1213 10:11:16.695823  908469 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:11:16.712374  908469 cli_runner.go:164] Run: docker volume create addons-054604 --label name.minikube.sigs.k8s.io=addons-054604 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:11:16.730051  908469 oci.go:103] Successfully created a docker volume addons-054604
	I1213 10:11:16.730134  908469 cli_runner.go:164] Run: docker run --rm --name addons-054604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054604 --entrypoint /usr/bin/test -v addons-054604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:11:18.807819  908469 cli_runner.go:217] Completed: docker run --rm --name addons-054604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054604 --entrypoint /usr/bin/test -v addons-054604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (2.077646129s)
	I1213 10:11:18.807853  908469 oci.go:107] Successfully prepared a docker volume addons-054604
	I1213 10:11:18.807892  908469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:11:18.807904  908469 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:11:18.807982  908469 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:11:22.878705  908469 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.070678289s)
	I1213 10:11:22.878736  908469 kic.go:203] duration metric: took 4.070828387s to extract preloaded images to volume ...
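
The preload is a .tar.lz4 archive that the harness unpacks into the docker volume via a throwaway /usr/bin/tar container, as logged above. For illustration only, the same stream can be walked in-process; this sketch assumes the github.com/pierrec/lz4/v4 module and is not what minikube actually does:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// Stream the preload tarball through an lz4 reader and list its
	// entries; actual extraction to /extractDir is elided.
	f, err := os.Open("preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(hdr.Name)
	}
}
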
	W1213 10:11:22.878900  908469 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:11:22.879012  908469 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:11:22.932064  908469 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-054604 --name addons-054604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-054604 --network addons-054604 --ip 192.168.49.2 --volume addons-054604:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:11:23.263191  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Running}}
	I1213 10:11:23.285327  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:23.315372  908469 cli_runner.go:164] Run: docker exec addons-054604 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:11:23.369758  908469 oci.go:144] the created container "addons-054604" has a running status.
	I1213 10:11:23.369789  908469 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa...
	I1213 10:11:23.537886  908469 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:11:23.563400  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:23.595351  908469 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:11:23.595378  908469 kic_runner.go:114] Args: [docker exec --privileged addons-054604 chown docker:docker /home/docker/.ssh/authorized_keys]
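
The kic SSH key step generates an RSA keypair on the host and copies the public half into the container as /home/docker/.ssh/authorized_keys. A self-contained sketch of the key-material side, using golang.org/x/crypto/ssh; file names mirror the log, but this is not minikube's exact code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// PEM-encode the private key (id_rsa) and derive the
	// authorized_keys line from the public half.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("authorized_keys", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		log.Fatal(err)
	}
}
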
	I1213 10:11:23.664690  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:23.687150  908469 machine.go:94] provisionDockerMachine start ...
	I1213 10:11:23.687260  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:23.712803  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:23.713120  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:23.713128  908469 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:11:23.713731  908469 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50672->127.0.0.1:33508: read: connection reset by peer
	I1213 10:11:26.865447  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054604
	
	I1213 10:11:26.865481  908469 ubuntu.go:182] provisioning hostname "addons-054604"
	I1213 10:11:26.865582  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:26.883311  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:26.883662  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:26.883674  908469 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-054604 && echo "addons-054604" | sudo tee /etc/hostname
	I1213 10:11:27.043713  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054604
	
	I1213 10:11:27.043814  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:27.061851  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:27.062191  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:27.062216  908469 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-054604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-054604/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-054604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:11:27.213982  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: 
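
The guarded shell block above only touches /etc/hosts when no line already names the host. The same logic as a pure-Go function; ensureHostsEntry is a hypothetical name for this sketch:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell above: if some line already ends
// in the hostname, leave the file alone; otherwise rewrite an existing
// 127.0.1.1 line, or append one.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == name {
			return hosts // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "addons-054604"))
}
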
	I1213 10:11:27.214011  908469 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:11:27.214045  908469 ubuntu.go:190] setting up certificates
	I1213 10:11:27.214066  908469 provision.go:84] configureAuth start
	I1213 10:11:27.214134  908469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054604
	I1213 10:11:27.230677  908469 provision.go:143] copyHostCerts
	I1213 10:11:27.230773  908469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:11:27.230902  908469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:11:27.230964  908469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:11:27.231019  908469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.addons-054604 san=[127.0.0.1 192.168.49.2 addons-054604 localhost minikube]
	I1213 10:11:27.531851  908469 provision.go:177] copyRemoteCerts
	I1213 10:11:27.531934  908469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:11:27.531975  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:27.549579  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:27.653412  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:11:27.671161  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:11:27.688998  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 10:11:27.706451  908469 provision.go:87] duration metric: took 492.356677ms to configureAuth
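
configureAuth generates a server certificate whose SANs are exactly the san=[...] list logged above. A sketch of such a certificate with crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair created earlier in this log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-054604"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the logged san=[...] list.
		DNSNames:    []string{"addons-054604", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
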
	I1213 10:11:27.706482  908469 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:11:27.706674  908469 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:11:27.706788  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:27.723681  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:27.724032  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:27.724051  908469 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:11:28.165728  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:11:28.165765  908469 machine.go:97] duration metric: took 4.478582135s to provisionDockerMachine
	I1213 10:11:28.165776  908469 client.go:176] duration metric: took 11.954512595s to LocalClient.Create
	I1213 10:11:28.165790  908469 start.go:167] duration metric: took 11.954576842s to libmachine.API.Create "addons-054604"
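
The container-runtime option step just above writes one sysconfig drop-in and restarts CRI-O over SSH. An equivalent local sketch; it must run as root on the node, and the path and contents are copied from the logged command:

package main

import (
	"log"
	"os"
	"os/exec"
)

const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

func main() {
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0644); err != nil {
		log.Fatal(err)
	}
	// Pick up the new options.
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		log.Fatal(err)
	}
}
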
	I1213 10:11:28.165797  908469 start.go:293] postStartSetup for "addons-054604" (driver="docker")
	I1213 10:11:28.165807  908469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:11:28.165891  908469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:11:28.165935  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.184151  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.289163  908469 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:11:28.292356  908469 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:11:28.292389  908469 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:11:28.292403  908469 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:11:28.292470  908469 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:11:28.292499  908469 start.go:296] duration metric: took 126.696518ms for postStartSetup
	I1213 10:11:28.292816  908469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054604
	I1213 10:11:28.309666  908469 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/config.json ...
	I1213 10:11:28.309961  908469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:11:28.310011  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.326796  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.430594  908469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:11:28.435144  908469 start.go:128] duration metric: took 12.229419889s to createHost
	I1213 10:11:28.435174  908469 start.go:83] releasing machines lock for "addons-054604", held for 12.229570422s
	I1213 10:11:28.435245  908469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054604
	I1213 10:11:28.451771  908469 ssh_runner.go:195] Run: cat /version.json
	I1213 10:11:28.451804  908469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:11:28.451823  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.451857  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.473169  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.483518  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.664257  908469 ssh_runner.go:195] Run: systemctl --version
	I1213 10:11:28.670875  908469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:11:28.716450  908469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:11:28.720676  908469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:11:28.720816  908469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:11:28.748799  908469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
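
The find/-exec step above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so that the kindnet config recommended earlier wins. A rough Go equivalent of that rename pass:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already side-lined
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
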
	I1213 10:11:28.748834  908469 start.go:496] detecting cgroup driver to use...
	I1213 10:11:28.748868  908469 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:11:28.748927  908469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:11:28.765590  908469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:11:28.779112  908469 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:11:28.779177  908469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:11:28.797176  908469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:11:28.815953  908469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:11:28.936324  908469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:11:29.056414  908469 docker.go:234] disabling docker service ...
	I1213 10:11:29.056535  908469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:11:29.079427  908469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:11:29.093062  908469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:11:29.202060  908469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:11:29.321704  908469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:11:29.334540  908469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:11:29.349120  908469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:11:29.349186  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.358108  908469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:11:29.358191  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.367488  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.375826  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.384816  908469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:11:29.392790  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.401425  908469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.415082  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.423865  908469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:11:29.431260  908469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:11:29.438673  908469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:29.563148  908469 ssh_runner.go:195] Run: sudo systemctl restart crio
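
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. The two central edits (pause image and cgroup manager), sketched with regexp instead of sed; the sysctl and conmon_cgroup edits from the log are omitted for brevity:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println(err)
		return
	}
	s := string(b)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		fmt.Println(err)
	}
	// A "systemctl daemon-reload" and "systemctl restart crio" follow
	// in the real flow, as logged below.
}
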
	I1213 10:11:29.760125  908469 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:11:29.760246  908469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:11:29.763780  908469 start.go:564] Will wait 60s for crictl version
	I1213 10:11:29.763859  908469 ssh_runner.go:195] Run: which crictl
	I1213 10:11:29.767171  908469 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:11:29.797081  908469 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:11:29.797212  908469 ssh_runner.go:195] Run: crio --version
	I1213 10:11:29.827143  908469 ssh_runner.go:195] Run: crio --version
	I1213 10:11:29.864448  908469 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 10:11:29.867207  908469 cli_runner.go:164] Run: docker network inspect addons-054604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:11:29.884089  908469 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:11:29.888172  908469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:11:29.898627  908469 kubeadm.go:884] updating cluster {Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
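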
	I1213 10:11:29.898757  908469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:11:29.898851  908469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:11:29.938612  908469 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:11:29.938640  908469 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:11:29.938700  908469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:11:29.969010  908469 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:11:29.969034  908469 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:11:29.969042  908469 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 10:11:29.969141  908469 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-054604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
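
The kubelet drop-in above is generated from the cluster config; only the binary version, hostname override and node IP vary per node. A text/template sketch that reproduces it (the field names are hypothetical, chosen for this example):

package main

import (
	"log"
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.34.2",
		"Hostname": "addons-054604",
		"NodeIP":   "192.168.49.2",
	})
	if err != nil {
		log.Fatal(err)
	}
}
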
	I1213 10:11:29.969235  908469 ssh_runner.go:195] Run: crio config
	I1213 10:11:30.053947  908469 cni.go:84] Creating CNI manager for ""
	I1213 10:11:30.053987  908469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:11:30.054005  908469 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:11:30.054061  908469 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-054604 NodeName:addons-054604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:11:30.054255  908469 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-054604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
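
The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A decoder-loop sketch that splits such a stream and reports each document's kind, assuming the gopkg.in/yaml.v3 module; minikube's own validation path differs:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			Kind              string `yaml:"kind"`
			KubernetesVersion string `yaml:"kubernetesVersion"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the multi-document stream
		} else if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(doc.Kind, doc.KubernetesVersion)
	}
}
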
	
	I1213 10:11:30.054386  908469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:11:30.064191  908469 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:11:30.064288  908469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:11:30.075251  908469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 10:11:30.091710  908469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:11:30.107458  908469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1213 10:11:30.123050  908469 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:11:30.127378  908469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:11:30.139504  908469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:30.263355  908469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:11:30.279385  908469 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604 for IP: 192.168.49.2
	I1213 10:11:30.279460  908469 certs.go:195] generating shared ca certs ...
	I1213 10:11:30.279491  908469 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.279662  908469 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:11:30.809770  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt ...
	I1213 10:11:30.809804  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt: {Name:mke9af723c1802ab5f9881f377ee1cc145a10625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.810039  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key ...
	I1213 10:11:30.810056  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key: {Name:mk35b68be8531b2f3c3930895b2758ea9f2d9c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.810147  908469 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:11:30.986161  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt ...
	I1213 10:11:30.986195  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt: {Name:mk05ef8b7a67caf7d58435e6dc3055b3f8800763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.986374  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key ...
	I1213 10:11:30.986391  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key: {Name:mk4187a64c78da2cf099426e1ad8e6cb90229bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.986467  908469 certs.go:257] generating profile certs ...
	I1213 10:11:30.986529  908469 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.key
	I1213 10:11:30.986549  908469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt with IP's: []
	I1213 10:11:31.175112  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt ...
	I1213 10:11:31.175147  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: {Name:mk05007e0f0f8a2cee63a7e5c259d597b9174c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.175347  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.key ...
	I1213 10:11:31.175362  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.key: {Name:mk026d4ab665fd6d0f8cd3a2cfb67ffe0df375e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.175450  908469 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff
	I1213 10:11:31.175480  908469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 10:11:31.323609  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff ...
	I1213 10:11:31.323643  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff: {Name:mk46ba298c7b9377bbae5f93060762fcd3f2448a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.323827  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff ...
	I1213 10:11:31.323842  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff: {Name:mk266a0ccecc3d5157687879e70164ea26a8f1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.323942  908469 certs.go:382] copying /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff -> /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt
	I1213 10:11:31.324035  908469 certs.go:386] copying /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff -> /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key
	I1213 10:11:31.324093  908469 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key
	I1213 10:11:31.324112  908469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt with IP's: []
	I1213 10:11:31.502137  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt ...
	I1213 10:11:31.502170  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt: {Name:mk4628c1ee88d6ec7065762b64d62c762b9a6b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.502367  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key ...
	I1213 10:11:31.502381  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key: {Name:mk688ca218594e35f8f3b894ae5d1e13e60f38d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.502582  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:11:31.502632  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:11:31.502659  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:11:31.502689  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
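Each "generating ... ca cert" and "generating signed profile cert" step above boils down to an x509 key pair written under .minikube. A minimal sketch of the CA step with Go's standard library, the shape of the minikubeCA and proxyClientCA lines (serial number, key size, and validity window are illustrative assumptions, not minikube's exact values):

    package certsketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    // newCA returns a self-signed CA certificate (DER) and its key.
    // The profile certs that follow in the log are then signed by this
    // CA with SANs such as the IP list shown for apiserver.crt.
    func newCA(cn string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: cn},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            IsCA:                  true,
            BasicConstraintsValid: true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        return der, key, err
    }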
	I1213 10:11:31.503304  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:11:31.522438  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:11:31.541125  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:11:31.559408  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:11:31.577675  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:11:31.594965  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:11:31.612927  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:11:31.630613  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:11:31.647452  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:11:31.664201  908469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:11:31.676848  908469 ssh_runner.go:195] Run: openssl version
	I1213 10:11:31.683356  908469 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.690968  908469 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:11:31.698663  908469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.702391  908469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.702495  908469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.744174  908469 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:11:31.751760  908469 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
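The ln -fs runs above follow OpenSSL's hashed trust-directory convention: alongside the PEM itself, /etc/ssl/certs gets a symlink named <subject-hash>.0 (here b5213941.0, from the openssl x509 -hash -noout call two lines earlier) so OpenSSL can find the CA by hash lookup. A sketch of that step, assuming the openssl binary is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of a PEM certificate
    // and points "<hash>.0" in certsDir at it, like the ln -fs above.
    func linkByHash(pem, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // -f semantics: replace a stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }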
	I1213 10:11:31.759098  908469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:11:31.763621  908469 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:11:31.763674  908469 kubeadm.go:401] StartCluster: {Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:11:31.763753  908469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:11:31.763822  908469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:11:31.792258  908469 cri.go:89] found id: ""
	I1213 10:11:31.792334  908469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:11:31.800076  908469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:11:31.808153  908469 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:11:31.808261  908469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:11:31.816142  908469 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:11:31.816212  908469 kubeadm.go:158] found existing configuration files:
	
	I1213 10:11:31.816286  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:11:31.823718  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:11:31.823809  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:11:31.831461  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:11:31.838595  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:11:31.838681  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:11:31.845992  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:11:31.853656  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:11:31.853763  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:11:31.861091  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:11:31.868712  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:11:31.868786  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:11:31.876136  908469 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:11:31.915793  908469 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:11:31.916321  908469 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:11:31.939613  908469 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:11:31.939951  908469 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:11:31.940079  908469 kubeadm.go:319] OS: Linux
	I1213 10:11:31.940147  908469 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:11:31.940204  908469 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:11:31.940257  908469 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:11:31.940316  908469 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:11:31.940367  908469 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:11:31.940434  908469 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:11:31.940526  908469 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:11:31.940601  908469 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:11:31.940685  908469 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:11:32.007330  908469 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:11:32.007450  908469 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:11:32.007551  908469 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:11:32.017500  908469 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:11:32.021744  908469 out.go:252]   - Generating certificates and keys ...
	I1213 10:11:32.021845  908469 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:11:32.021922  908469 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:11:33.570048  908469 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:11:33.938587  908469 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:11:34.308929  908469 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:11:35.272718  908469 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:11:35.701530  908469 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:11:35.701924  908469 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-054604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:11:35.914605  908469 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:11:35.914967  908469 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-054604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:11:36.272736  908469 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:11:36.872269  908469 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:11:37.216848  908469 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:11:37.217149  908469 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:11:37.756874  908469 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:11:37.856273  908469 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:11:38.137670  908469 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:11:38.363575  908469 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:11:38.839088  908469 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:11:38.839937  908469 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:11:38.842751  908469 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:11:38.846385  908469 out.go:252]   - Booting up control plane ...
	I1213 10:11:38.846507  908469 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:11:38.846597  908469 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:11:38.846673  908469 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:11:38.862616  908469 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:11:38.862990  908469 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:11:38.871185  908469 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:11:38.872157  908469 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:11:38.872575  908469 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:11:39.002283  908469 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:11:39.002406  908469 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:11:40.999191  908469 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000924838s
	I1213 10:11:41.005826  908469 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:11:41.005925  908469 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 10:11:41.006015  908469 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:11:41.006094  908469 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:11:43.966494  908469 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.962017345s
	I1213 10:11:45.285788  908469 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.281867052s
	I1213 10:11:47.006223  908469 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002240372s
	I1213 10:11:47.038114  908469 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:11:47.050877  908469 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:11:47.065871  908469 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:11:47.066082  908469 kubeadm.go:319] [mark-control-plane] Marking the node addons-054604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:11:47.077388  908469 kubeadm.go:319] [bootstrap-token] Using token: bsvmhz.9ag4oa3ly42j26tf
	I1213 10:11:47.080331  908469 out.go:252]   - Configuring RBAC rules ...
	I1213 10:11:47.080456  908469 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:11:47.086431  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:11:47.094289  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:11:47.098261  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:11:47.102478  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:11:47.106907  908469 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:11:47.413629  908469 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:11:47.838952  908469 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:11:48.417792  908469 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:11:48.417812  908469 kubeadm.go:319] 
	I1213 10:11:48.417874  908469 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:11:48.417878  908469 kubeadm.go:319] 
	I1213 10:11:48.417955  908469 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:11:48.417959  908469 kubeadm.go:319] 
	I1213 10:11:48.417984  908469 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:11:48.418043  908469 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:11:48.418094  908469 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:11:48.418098  908469 kubeadm.go:319] 
	I1213 10:11:48.418152  908469 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:11:48.418157  908469 kubeadm.go:319] 
	I1213 10:11:48.418204  908469 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:11:48.418208  908469 kubeadm.go:319] 
	I1213 10:11:48.418260  908469 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:11:48.418336  908469 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:11:48.418408  908469 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:11:48.418412  908469 kubeadm.go:319] 
	I1213 10:11:48.418496  908469 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:11:48.418573  908469 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:11:48.418577  908469 kubeadm.go:319] 
	I1213 10:11:48.418660  908469 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bsvmhz.9ag4oa3ly42j26tf \
	I1213 10:11:48.418763  908469 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b3c7efe1ca5668711c134b6b98856894f548fd5af0cfb3bc5013f3facc637401 \
	I1213 10:11:48.418783  908469 kubeadm.go:319] 	--control-plane 
	I1213 10:11:48.418787  908469 kubeadm.go:319] 
	I1213 10:11:48.418871  908469 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:11:48.418875  908469 kubeadm.go:319] 
	I1213 10:11:48.418956  908469 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bsvmhz.9ag4oa3ly42j26tf \
	I1213 10:11:48.419058  908469 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b3c7efe1ca5668711c134b6b98856894f548fd5af0cfb3bc5013f3facc637401 
	I1213 10:11:48.422452  908469 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:11:48.422678  908469 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:11:48.422782  908469 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:11:48.422802  908469 cni.go:84] Creating CNI manager for ""
	I1213 10:11:48.422810  908469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:11:48.425915  908469 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 10:11:48.428751  908469 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 10:11:48.432933  908469 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:11:48.432956  908469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 10:11:48.447928  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
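The choice logged at cni.go:143 above ("docker" driver plus "crio" runtime, recommending kindnet) is a small decision table. A hedged sketch of that logic (the function name, the explicit-flag case, and the fallback are illustrative, not minikube's exact code):

    package cnisketch

    // chooseCNI sketches the cni.go:143 recommendation: an explicit
    // --cni choice wins; otherwise the docker driver with a non-docker
    // runtime (crio here) gets kindnet, since the runtime alone does
    // not provide pod networking in that combination.
    func chooseCNI(driver, runtime, requested string) string {
        if requested != "" {
            return requested
        }
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "" // leave networking to the runtime's default
    }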
	I1213 10:11:48.732302  908469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:11:48.732394  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-054604 minikube.k8s.io/updated_at=2025_12_13T10_11_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=addons-054604 minikube.k8s.io/primary=true
	I1213 10:11:48.732350  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:48.864582  908469 ops.go:34] apiserver oom_adj: -16
	I1213 10:11:48.864732  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:49.365271  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:49.865386  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:50.365645  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:50.865361  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:51.364696  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:51.865310  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:52.365402  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:52.865684  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:53.365240  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:53.499457  908469 kubeadm.go:1114] duration metric: took 4.767165787s to wait for elevateKubeSystemPrivileges
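The ten near-identical "kubectl get sa default" runs above, spaced roughly 500ms apart, are a readiness poll: once the default service account exists, the RBAC bootstrap (the minikube-rbac clusterrolebinding created at 10:11:48.732350) can be treated as settled, which is the 4.77s "elevateKubeSystemPrivileges" duration just logged. A sketch of that loop, with check standing in for the kubectl invocation:

    package pollsketch

    import (
        "fmt"
        "time"
    )

    // waitForDefaultSA retries check every 500ms until it succeeds or
    // the timeout lapses, matching the cadence of the log lines above.
    func waitForDefaultSA(check func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account never appeared: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }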
	I1213 10:11:53.499492  908469 kubeadm.go:403] duration metric: took 21.735822105s to StartCluster
	I1213 10:11:53.499510  908469 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:53.499625  908469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:11:53.499997  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:53.500237  908469 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:11:53.500379  908469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:11:53.500629  908469 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:11:53.500670  908469 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 10:11:53.500748  908469 addons.go:70] Setting yakd=true in profile "addons-054604"
	I1213 10:11:53.500766  908469 addons.go:239] Setting addon yakd=true in "addons-054604"
	I1213 10:11:53.500794  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.501254  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.501944  908469 addons.go:70] Setting metrics-server=true in profile "addons-054604"
	I1213 10:11:53.501961  908469 addons.go:70] Setting registry=true in profile "addons-054604"
	I1213 10:11:53.501971  908469 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-054604"
	I1213 10:11:53.501978  908469 addons.go:239] Setting addon registry=true in "addons-054604"
	I1213 10:11:53.501980  908469 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-054604"
	I1213 10:11:53.502007  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.502015  908469 addons.go:70] Setting cloud-spanner=true in profile "addons-054604"
	I1213 10:11:53.502028  908469 addons.go:239] Setting addon cloud-spanner=true in "addons-054604"
	I1213 10:11:53.502041  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.502432  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.502449  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.506773  908469 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-054604"
	I1213 10:11:53.506893  908469 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-054604"
	I1213 10:11:53.506953  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.502009  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.507484  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.501951  908469 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-054604"
	I1213 10:11:53.513930  908469 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-054604"
	I1213 10:11:53.514004  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.501964  908469 addons.go:239] Setting addon metrics-server=true in "addons-054604"
	I1213 10:11:53.514950  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.515311  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514071  908469 addons.go:70] Setting default-storageclass=true in profile "addons-054604"
	I1213 10:11:53.515547  908469 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-054604"
	I1213 10:11:53.515812  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514080  908469 addons.go:70] Setting gcp-auth=true in profile "addons-054604"
	I1213 10:11:53.520826  908469 mustload.go:66] Loading cluster: addons-054604
	I1213 10:11:53.521090  908469 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:11:53.523619  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514080  908469 addons.go:70] Setting registry-creds=true in profile "addons-054604"
	I1213 10:11:53.514087  908469 addons.go:70] Setting ingress=true in profile "addons-054604"
	I1213 10:11:53.514093  908469 addons.go:70] Setting storage-provisioner=true in profile "addons-054604"
	I1213 10:11:53.527129  908469 addons.go:239] Setting addon storage-provisioner=true in "addons-054604"
	I1213 10:11:53.514101  908469 addons.go:70] Setting ingress-dns=true in profile "addons-054604"
	I1213 10:11:53.527202  908469 addons.go:239] Setting addon ingress-dns=true in "addons-054604"
	I1213 10:11:53.514101  908469 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-054604"
	I1213 10:11:53.527288  908469 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-054604"
	I1213 10:11:53.514107  908469 addons.go:70] Setting inspektor-gadget=true in profile "addons-054604"
	I1213 10:11:53.527373  908469 addons.go:239] Setting addon inspektor-gadget=true in "addons-054604"
	I1213 10:11:53.514108  908469 addons.go:70] Setting volcano=true in profile "addons-054604"
	I1213 10:11:53.527441  908469 addons.go:239] Setting addon volcano=true in "addons-054604"
	I1213 10:11:53.514114  908469 addons.go:70] Setting volumesnapshots=true in profile "addons-054604"
	I1213 10:11:53.527520  908469 addons.go:239] Setting addon volumesnapshots=true in "addons-054604"
	I1213 10:11:53.514492  908469 out.go:179] * Verifying Kubernetes components...
	I1213 10:11:53.514517  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514926  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.549481  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.550121  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.557672  908469 addons.go:239] Setting addon registry-creds=true in "addons-054604"
	I1213 10:11:53.557761  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.558356  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.549497  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.549500  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.527064  908469 addons.go:239] Setting addon ingress=true in "addons-054604"
	I1213 10:11:53.588747  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.594461  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.598165  908469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:53.635515  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.636045  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.658583  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.659118  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.662037  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.675320  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.686224  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.715435  908469 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1213 10:11:53.743245  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 10:11:53.750123  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 10:11:53.753696  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 10:11:53.757785  908469 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 10:11:53.757955  908469 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 10:11:53.758989  908469 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 10:11:53.785784  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 10:11:53.785825  908469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 10:11:53.785945  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
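The docker container inspect template above, repeated for each addon installer below, extracts the host port that Docker mapped to the container's 22/tcp; its output is what feeds the later "new ssh client: &{IP:127.0.0.1 Port:33508 ...}" lines. A sketch of the same lookup:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshPort asks Docker which host port is bound to 22/tcp inside the
    // named container, mirroring the inspect -f template in the log.
    func sshPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        port, err := sshPort("addons-054604")
        fmt.Println(port, err)
    }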
	I1213 10:11:53.793993  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.795921  908469 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 10:11:53.795947  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 10:11:53.796006  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.780882  908469 addons.go:239] Setting addon default-storageclass=true in "addons-054604"
	I1213 10:11:53.802121  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.802883  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.808875  908469 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 10:11:53.812214  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 10:11:53.812332  908469 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 10:11:53.812371  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 10:11:53.812464  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.780926  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 10:11:53.816210  908469 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 10:11:53.816288  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.816489  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:11:53.816493  908469 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 10:11:53.874901  908469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:11:53.876345  908469 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 10:11:53.876736  908469 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 10:11:53.893928  908469 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:11:53.901094  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 10:11:53.901166  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.901669  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 10:11:53.902801  908469 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:11:53.903061  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 10:11:53.903144  908469 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:11:53.906528  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 10:11:53.905692  908469 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:11:53.908400  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 10:11:53.905708  908469 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 10:11:53.908548  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.906639  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.910041  908469 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:11:53.910058  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:11:53.910126  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.911511  908469 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-054604"
	I1213 10:11:53.911575  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.912008  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	W1213 10:11:53.912830  908469 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 10:11:53.916979  908469 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:11:53.934618  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 10:11:53.934709  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.916999  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 10:11:53.937928  908469 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 10:11:53.938038  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 10:11:53.939825  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:11:53.940700  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 10:11:53.940726  908469 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 10:11:53.940805  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.972539  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 10:11:53.979102  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 10:11:53.982154  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 10:11:53.982183  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 10:11:53.982261  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.990818  908469 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:11:53.990838  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 10:11:53.990900  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:54.011197  908469 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:11:54.011223  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 10:11:54.011292  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:54.057711  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.058329  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.103169  908469 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:11:54.103203  908469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:11:54.103269  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:54.151759  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.182947  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.194559  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.201131  908469 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 10:11:54.204927  908469 out.go:179]   - Using image docker.io/busybox:stable
	I1213 10:11:54.211092  908469 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:11:54.211118  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 10:11:54.211185  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:54.217708  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.230331  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.237877  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.249654  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.251180  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.276041  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	W1213 10:11:54.279548  908469 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:11:54.279590  908469 retry.go:31] will retry after 167.591346ms: ssh: handshake failed: EOF
	I1213 10:11:54.288166  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.290206  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.292899  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	W1213 10:11:54.294010  908469 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:11:54.294031  908469 retry.go:31] will retry after 339.894686ms: ssh: handshake failed: EOF
	I1213 10:11:54.303633  908469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:11:54.313366  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.652585  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:11:54.726593  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 10:11:54.807003  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:11:54.811902  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:11:54.844630  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:11:54.856012  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:11:54.874064  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 10:11:54.874089  908469 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 10:11:54.894129  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 10:11:54.894159  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 10:11:54.984896  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:11:55.003069  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:11:55.004628  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:11:55.010692  908469 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 10:11:55.010721  908469 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 10:11:55.092446  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 10:11:55.092473  908469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 10:11:55.109595  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:11:55.119094  908469 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.243354014s)
	I1213 10:11:55.119125  908469 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
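The sed pipeline whose completion is logged just above is hard to read inline; unpacked, it edits the coredns ConfigMap so the Corefile gains a log directive before errors and this hosts stanza before the forward block, which is what the "host record injected" line refers to:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }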
	I1213 10:11:55.120742  908469 node_ready.go:35] waiting up to 6m0s for node "addons-054604" to be "Ready" ...
	I1213 10:11:55.125184  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 10:11:55.125212  908469 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 10:11:55.170807  908469 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 10:11:55.170887  908469 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 10:11:55.193981  908469 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:11:55.194052  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 10:11:55.346404  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:11:55.346479  908469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 10:11:55.347699  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 10:11:55.347756  908469 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 10:11:55.394304  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 10:11:55.394382  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 10:11:55.394723  908469 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 10:11:55.394773  908469 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 10:11:55.439263  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:11:55.455215  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:11:55.574344  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:11:55.574369  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 10:11:55.628738  908469 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 10:11:55.628766  908469 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 10:11:55.668071  908469 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-054604" context rescaled to 1 replicas
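The rescale message reflects CoreDNS being trimmed to a single replica on this single-node cluster. The manual equivalent would be something like the following, an illustrative command rather than minikube's internal client-go call:

    kubectl -n kube-system scale deployment coredns --replicas=1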
	I1213 10:11:55.680325  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 10:11:55.680351  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 10:11:55.731506  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:11:55.896105  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 10:11:55.896132  908469 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 10:11:55.959774  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 10:11:55.959800  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 10:11:56.063615  908469 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:11:56.063642  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 10:11:56.151865  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 10:11:56.151891  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 10:11:56.259772  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:11:56.314803  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 10:11:56.314881  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 10:11:56.463523  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 10:11:56.463594  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 10:11:56.607433  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 10:11:56.607508  908469 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 10:11:56.721935  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.995258102s)
	I1213 10:11:56.854728  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 10:11:56.854751  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 10:11:56.925209  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 10:11:56.925237  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 10:11:57.029836  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 10:11:57.029915  908469 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1213 10:11:57.124173  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:11:57.134210  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1213 10:11:59.127769  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:11:59.530707  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.723670593s)
	I1213 10:11:59.530743  908469 addons.go:495] Verifying addon ingress=true in "addons-054604"
	I1213 10:11:59.530932  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.719006406s)
	I1213 10:11:59.531171  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.6865146s)
	I1213 10:11:59.531230  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.6751864s)
	I1213 10:11:59.531325  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.546389174s)
	I1213 10:11:59.531362  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.526708489s)
	I1213 10:11:59.531395  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.528302149s)
	I1213 10:11:59.531447  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.42182856s)
	I1213 10:11:59.531557  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.092267641s)
	I1213 10:11:59.531573  908469 addons.go:495] Verifying addon registry=true in "addons-054604"
	I1213 10:11:59.531955  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.076709114s)
	I1213 10:11:59.531980  908469 addons.go:495] Verifying addon metrics-server=true in "addons-054604"
	I1213 10:11:59.532016  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.800482768s)
	I1213 10:11:59.534256  908469 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-054604 service yakd-dashboard -n yakd-dashboard
	
	I1213 10:11:59.534339  908469 out.go:179] * Verifying ingress addon...
	I1213 10:11:59.534360  908469 out.go:179] * Verifying registry addon...
	I1213 10:11:59.539347  908469 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 10:11:59.540017  908469 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 10:11:59.562304  908469 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:11:59.562372  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:11:59.562886  908469 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 10:11:59.562908  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:11:59.594838  908469 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
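The default-storageclass warning above is an optimistic-concurrency conflict: the local-path StorageClass changed between the addon callback's read and its write, so the apiserver rejected the stale update. Re-reading and retrying resolves it; done by hand, the same annotation change could be applied with a patch, sketched here using the standard default-class annotation key:

    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'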
	I1213 10:11:59.669477  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.409621169s)
	W1213 10:11:59.669614  908469 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 10:11:59.669653  908469 retry.go:31] will retry after 202.635343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
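Both apply attempts fail the same way: the VolumeSnapshotClass object is submitted in the same kubectl apply batch that creates its CRD, and the CRD is not yet registered when the custom resource is validated, hence "ensure CRDs are installed first". The log shows minikube's answer is simply to retry (and, a few lines below, to re-apply with --force). A common manual workaround is a two-phase apply that waits for the CRD to be established; the file names below come from the log, while the flow itself is an assumed sketch:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml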
	I1213 10:11:59.862739  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.728429394s)
	I1213 10:11:59.862777  908469 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-054604"
	I1213 10:11:59.866013  908469 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 10:11:59.869494  908469 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 10:11:59.873297  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:11:59.888298  908469 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:11:59.888333  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
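The kapi.go:96 lines that dominate the rest of this log are a poll loop: list pods by label selector, log any that are not yet Running, sleep, repeat. Outside the harness, a close (though not identical) equivalent is kubectl's own wait, shown here purely as an illustration:

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=300s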
	I1213 10:12:00.071553  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:00.073196  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:00.432921  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:00.545651  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:00.545801  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:00.873119  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:01.044003  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:01.044467  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:01.373496  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:01.404048  908469 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 10:12:01.404151  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:12:01.420397  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:12:01.530541  908469 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 10:12:01.544980  908469 addons.go:239] Setting addon gcp-auth=true in "addons-054604"
	I1213 10:12:01.545027  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:12:01.545514  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:12:01.547401  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:01.547804  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:01.563658  908469 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 10:12:01.563713  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:12:01.583397  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	W1213 10:12:01.624335  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:01.872961  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:02.043105  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:02.044446  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:02.372228  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:02.543739  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:02.543925  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:02.873255  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:03.045206  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:03.046256  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:03.110037  908469 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.546349139s)
	I1213 10:12:03.110342  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.236963938s)
	I1213 10:12:03.113309  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:12:03.116083  908469 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 10:12:03.118934  908469 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 10:12:03.118965  908469 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 10:12:03.133428  908469 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 10:12:03.133454  908469 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 10:12:03.148544  908469 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:12:03.148569  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 10:12:03.162470  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:12:03.372519  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:03.552311  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:03.553301  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:03.628582  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:03.677675  908469 addons.go:495] Verifying addon gcp-auth=true in "addons-054604"
	I1213 10:12:03.681038  908469 out.go:179] * Verifying gcp-auth addon...
	I1213 10:12:03.684243  908469 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 10:12:03.687766  908469 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 10:12:03.687785  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:03.873056  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:04.042997  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:04.043159  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:04.187917  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:04.372950  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:04.544134  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:04.544383  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:04.687115  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:04.873891  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:05.044177  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:05.044543  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:05.187465  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:05.373278  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:05.543376  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:05.543656  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:05.687581  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:05.873053  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:06.043231  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:06.044558  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:06.124232  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:06.187138  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:06.372951  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:06.543704  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:06.544094  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:06.687693  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:06.872698  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:07.043983  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:07.044197  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:07.188007  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:07.372928  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:07.543461  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:07.543929  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:07.687076  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:07.872808  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:08.044421  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:08.045356  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:08.188115  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:08.372938  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:08.544447  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:08.544534  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:08.624451  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:08.687294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:08.873117  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:09.043366  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:09.043699  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:09.187357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:09.372224  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:09.543169  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:09.543225  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:09.687723  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:09.872530  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:10.043743  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:10.044670  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:10.187920  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:10.372808  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:10.543979  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:10.544669  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:10.687354  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:10.873617  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:11.044212  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:11.044283  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:11.124050  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:11.188068  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:11.372854  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:11.543986  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:11.544081  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:11.687656  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:11.872699  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:12.043916  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:12.044148  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:12.187721  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:12.373092  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:12.544687  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:12.544867  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:12.687862  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:12.875242  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:13.043464  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:13.043658  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:13.187305  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:13.373242  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:13.543847  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:13.544061  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:13.624106  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:13.688114  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:13.873084  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:14.043254  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:14.043413  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:14.187831  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:14.373250  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:14.543891  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:14.544132  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:14.688350  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:14.873624  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:15.047340  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:15.047522  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:15.187482  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:15.372612  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:15.544103  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:15.544501  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:15.687998  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:15.873071  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:16.043567  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:16.044064  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:16.123500  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:16.187712  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:16.372951  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:16.547265  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:16.547500  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:16.687193  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:16.873606  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:17.043788  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:17.043853  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:17.187350  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:17.373779  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:17.544935  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:17.545024  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:17.687709  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:17.872457  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:18.044066  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:18.044267  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:18.124150  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:18.188145  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:18.373219  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:18.543853  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:18.544113  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:18.687943  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:18.873103  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:19.043098  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:19.043877  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:19.187357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:19.373257  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:19.543589  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:19.543706  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:19.687285  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:19.873342  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:20.043598  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:20.043870  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:20.189297  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:20.373575  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:20.543676  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:20.544339  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:20.624627  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:20.687252  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:20.873407  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:21.044128  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:21.044127  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:21.188022  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:21.372957  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:21.543405  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:21.543896  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:21.687602  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:21.872328  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:22.044581  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:22.046010  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:22.187703  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:22.372451  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:22.544808  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:22.545516  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:22.687022  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:22.873349  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:23.043739  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:23.043856  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:23.123985  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:23.187697  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:23.373339  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:23.543883  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:23.544270  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:23.688094  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:23.872939  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:24.043381  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:24.043619  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:24.187719  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:24.372499  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:24.543521  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:24.543782  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:24.687715  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:24.872813  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:25.049744  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:25.050049  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:25.124237  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:25.187062  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:25.372973  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:25.543173  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:25.543453  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:25.687131  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:25.873345  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:26.043610  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:26.043763  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:26.187787  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:26.372595  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:26.544096  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:26.544370  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:26.687881  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:26.873409  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:27.043857  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:27.044127  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:27.187711  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:27.372817  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:27.543389  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:27.543829  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:27.623499  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:27.687350  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:27.873012  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:28.043668  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:28.043816  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:28.187720  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:28.372552  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:28.543910  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:28.544067  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:28.687725  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:28.873313  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:29.043391  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:29.043622  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:29.187328  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:29.372536  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:29.543501  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:29.543851  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:29.687098  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:29.873040  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:30.045191  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:30.045420  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:30.124965  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:30.188152  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:30.372847  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:30.543447  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:30.543503  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:30.687855  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:30.872748  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:31.043952  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:31.044209  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:31.187212  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:31.373232  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:31.544357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:31.544629  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:31.687294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:31.872168  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:32.043698  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:32.043803  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:32.187496  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:32.372142  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:32.543347  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:32.543576  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:32.624582  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:32.687612  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:32.872326  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:33.043508  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:33.043919  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:33.187274  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:33.373039  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:33.544151  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:33.544314  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:33.688207  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:33.872423  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:34.043510  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:34.043707  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:34.187538  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:34.397719  908469 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:12:34.397793  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:34.590625  908469 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:12:34.590780  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:34.590864  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:34.680701  908469 node_ready.go:49] node "addons-054604" is "Ready"
	I1213 10:12:34.680793  908469 node_ready.go:38] duration metric: took 39.560017143s for node "addons-054604" to be "Ready" ...
	I1213 10:12:34.680827  908469 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:12:34.680998  908469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:34.716229  908469 api_server.go:72] duration metric: took 41.215954564s to wait for apiserver process to appear ...
	I1213 10:12:34.716269  908469 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:12:34.716307  908469 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 10:12:34.718346  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:34.730648  908469 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 10:12:34.731863  908469 api_server.go:141] control plane version: v1.34.2
	I1213 10:12:34.731887  908469 api_server.go:131] duration metric: took 15.609579ms to wait for apiserver health ...
	I1213 10:12:34.731896  908469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:12:34.750910  908469 system_pods.go:59] 19 kube-system pods found
	I1213 10:12:34.750949  908469 system_pods.go:61] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Pending
	I1213 10:12:34.750983  908469 system_pods.go:61] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:34.750995  908469 system_pods.go:61] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending
	I1213 10:12:34.751001  908469 system_pods.go:61] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending
	I1213 10:12:34.751007  908469 system_pods.go:61] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:34.751018  908469 system_pods.go:61] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:34.751022  908469 system_pods.go:61] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:34.751026  908469 system_pods.go:61] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:34.751031  908469 system_pods.go:61] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending
	I1213 10:12:34.751034  908469 system_pods.go:61] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:34.751038  908469 system_pods.go:61] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:34.751062  908469 system_pods.go:61] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending
	I1213 10:12:34.751068  908469 system_pods.go:61] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending
	I1213 10:12:34.751084  908469 system_pods.go:61] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:34.751098  908469 system_pods.go:61] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:34.751103  908469 system_pods.go:61] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending
	I1213 10:12:34.751108  908469 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending
	I1213 10:12:34.751112  908469 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending
	I1213 10:12:34.751122  908469 system_pods.go:61] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending
	I1213 10:12:34.751128  908469 system_pods.go:74] duration metric: took 19.225573ms to wait for pod list to return data ...
	I1213 10:12:34.751135  908469 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:12:34.754412  908469 default_sa.go:45] found service account: "default"
	I1213 10:12:34.754446  908469 default_sa.go:55] duration metric: took 3.27761ms for default service account to be created ...
	I1213 10:12:34.754456  908469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:12:34.787447  908469 system_pods.go:86] 19 kube-system pods found
	I1213 10:12:34.787501  908469 system_pods.go:89] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Pending
	I1213 10:12:34.787510  908469 system_pods.go:89] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:34.787515  908469 system_pods.go:89] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending
	I1213 10:12:34.787520  908469 system_pods.go:89] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending
	I1213 10:12:34.787563  908469 system_pods.go:89] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:34.787568  908469 system_pods.go:89] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:34.787579  908469 system_pods.go:89] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:34.787583  908469 system_pods.go:89] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:34.787587  908469 system_pods.go:89] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending
	I1213 10:12:34.787592  908469 system_pods.go:89] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:34.787596  908469 system_pods.go:89] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:34.787609  908469 system_pods.go:89] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending
	I1213 10:12:34.787613  908469 system_pods.go:89] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending
	I1213 10:12:34.787660  908469 system_pods.go:89] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:34.787678  908469 system_pods.go:89] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:34.787683  908469 system_pods.go:89] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending
	I1213 10:12:34.787687  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending
	I1213 10:12:34.787690  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending
	I1213 10:12:34.787693  908469 system_pods.go:89] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending
	I1213 10:12:34.787720  908469 retry.go:31] will retry after 214.288949ms: missing components: kube-dns
	I1213 10:12:34.883223  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:35.027498  908469 system_pods.go:86] 19 kube-system pods found
	I1213 10:12:35.027555  908469 system_pods.go:89] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:12:35.027565  908469 system_pods.go:89] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:35.027571  908469 system_pods.go:89] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending
	I1213 10:12:35.027575  908469 system_pods.go:89] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending
	I1213 10:12:35.027579  908469 system_pods.go:89] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:35.027585  908469 system_pods.go:89] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:35.027613  908469 system_pods.go:89] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:35.027626  908469 system_pods.go:89] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:35.027633  908469 system_pods.go:89] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:12:35.027637  908469 system_pods.go:89] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:35.027642  908469 system_pods.go:89] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:35.027656  908469 system_pods.go:89] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:12:35.027660  908469 system_pods.go:89] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending
	I1213 10:12:35.027666  908469 system_pods.go:89] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:35.027691  908469 system_pods.go:89] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:35.027709  908469 system_pods.go:89] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending
	I1213 10:12:35.027720  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending
	I1213 10:12:35.027728  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:12:35.027732  908469 system_pods.go:89] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending
	I1213 10:12:35.027755  908469 retry.go:31] will retry after 285.292541ms: missing components: kube-dns
	I1213 10:12:35.056284  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:35.056922  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:35.197979  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:35.318926  908469 system_pods.go:86] 19 kube-system pods found
	I1213 10:12:35.318972  908469 system_pods.go:89] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:12:35.318988  908469 system_pods.go:89] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:35.320717  908469 system_pods.go:89] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:12:35.320748  908469 system_pods.go:89] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:12:35.320787  908469 system_pods.go:89] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:35.320802  908469 system_pods.go:89] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:35.320807  908469 system_pods.go:89] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:35.320829  908469 system_pods.go:89] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:35.320843  908469 system_pods.go:89] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:12:35.320852  908469 system_pods.go:89] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:35.320858  908469 system_pods.go:89] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:35.320866  908469 system_pods.go:89] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:12:35.320884  908469 system_pods.go:89] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:12:35.320911  908469 system_pods.go:89] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:35.320933  908469 system_pods.go:89] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:35.320946  908469 system_pods.go:89] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:12:35.320954  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:12:35.320975  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:12:35.320983  908469 system_pods.go:89] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:12:35.320992  908469 system_pods.go:126] duration metric: took 566.506729ms to wait for k8s-apps to be running ...
	I1213 10:12:35.321015  908469 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:12:35.321090  908469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:12:35.347112  908469 system_svc.go:56] duration metric: took 26.085789ms WaitForService to wait for kubelet
	I1213 10:12:35.347190  908469 kubeadm.go:587] duration metric: took 41.846919393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:12:35.347228  908469 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:12:35.350484  908469 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 10:12:35.350565  908469 node_conditions.go:123] node cpu capacity is 2
	I1213 10:12:35.350605  908469 node_conditions.go:105] duration metric: took 3.353976ms to run NodePressure ...
	I1213 10:12:35.350640  908469 start.go:242] waiting for startup goroutines ...
	I1213 10:12:35.423179  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:35.548337  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:35.548553  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:35.687413  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:35.873238  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:36.045632  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:36.045941  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:36.188270  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:36.374188  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:36.543151  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:36.544259  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:36.687282  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:36.877695  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:37.044202  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:37.045036  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:37.188221  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:37.379932  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:37.546660  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:37.547467  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:37.688498  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:37.873357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:38.046561  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:38.047027  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:38.192205  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:38.374079  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:38.547223  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:38.550528  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:38.691294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:38.876186  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:39.046124  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:39.046386  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:39.189807  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:39.375775  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:39.547890  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:39.548243  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:39.689010  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:39.874338  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:40.060241  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:40.061092  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:40.188707  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:40.373042  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:40.543588  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:40.545252  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:40.687800  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:40.873294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:41.045320  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:41.045446  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:41.187383  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:41.372617  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:41.544406  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:41.544784  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:41.688286  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:41.872472  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:42.045582  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:42.045742  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:42.198041  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:42.373080  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:42.544214  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:42.544324  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:42.687731  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:42.873666  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:43.047469  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:43.048081  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:43.189044  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:43.373866  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:43.544469  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:43.544945  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:43.688315  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:43.873329  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:44.044843  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:44.046035  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:44.188621  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:44.372587  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:44.544380  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:44.544823  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:44.687944  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:44.873109  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:45.047180  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:45.047670  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:45.194310  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:45.374493  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:45.546181  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:45.546613  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:45.688869  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:45.873017  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:46.045773  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:46.046177  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:46.187349  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:46.372407  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:46.544253  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:46.544419  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:46.687433  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:46.873137  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:47.044457  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:47.044595  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:47.187801  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:47.373655  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:47.545030  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:47.545224  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:47.688386  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:47.873481  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:48.044679  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:48.045738  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:48.188945  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:48.373733  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:48.544530  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:48.544880  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:48.688449  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:48.873386  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:49.057251  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:49.057838  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:49.188065  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:49.374358  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:49.545753  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:49.546195  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:49.693213  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:49.873684  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:50.045174  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:50.045264  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:50.188418  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:50.373459  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:50.544764  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:50.545136  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:50.688315  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:50.874208  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:51.046780  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:51.050443  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:51.187831  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:51.373579  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:51.543812  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:51.544964  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:51.690260  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:51.873093  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:52.045684  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:52.046318  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:52.188311  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:52.374048  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:52.544561  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:52.544718  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:52.688695  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:52.873403  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:53.046046  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:53.046442  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:53.187418  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:53.373139  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:53.544889  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:53.545324  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:53.689308  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:53.873251  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:54.045938  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:54.046593  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:54.215842  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:54.373707  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:54.545841  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:54.546221  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:54.687200  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:54.873950  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:55.044551  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:55.044719  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:55.187650  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:55.372964  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:55.546360  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:55.548448  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:55.688831  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:55.873526  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:56.045911  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:56.046346  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:56.187233  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:56.373159  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:56.544451  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:56.544660  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:56.687981  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:56.873646  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:57.045105  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:57.045568  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:57.188017  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:57.373474  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:57.544168  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:57.544219  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:57.691682  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:57.873860  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:58.046748  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:58.047352  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:58.187799  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:58.373502  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:58.545194  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:58.545347  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:58.686989  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:58.873164  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:59.044208  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:59.044353  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:59.186981  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:59.373465  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:59.554312  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:59.554727  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:59.688201  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:59.874124  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:00.054419  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:00.058913  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:00.199344  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:00.380081  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:00.544510  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:00.545128  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:00.687215  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:00.873431  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:01.047517  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:01.048373  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:01.187792  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:01.373522  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:01.545473  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:01.545873  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:01.688365  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:01.873305  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:02.045168  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:02.046191  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:02.188684  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:02.373851  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:02.544794  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:02.546177  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:02.690990  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:02.875966  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:03.050279  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:03.050671  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:03.196068  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:03.386950  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:03.547166  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:03.547607  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:03.688241  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:03.878777  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:04.044888  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:04.045015  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:04.188872  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:04.373777  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:04.544476  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:04.544620  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:04.687952  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:04.873669  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:05.044706  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:05.045102  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:05.188219  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:05.373628  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:05.544514  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:05.544666  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:05.687976  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:05.873663  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:06.043961  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:06.044576  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:06.188425  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:06.374346  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:06.544781  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:06.544945  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:06.689518  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:06.873177  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:07.044412  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:07.044700  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:07.192141  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:07.373353  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:07.543773  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:07.544053  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:07.692315  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:07.874382  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:08.044708  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:08.044743  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:08.187562  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:08.372798  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:08.545900  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:08.546363  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:08.687431  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:08.873879  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:09.045402  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:09.045791  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:09.188458  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:09.373820  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:09.543297  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:09.543406  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:09.687766  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:09.873713  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:10.045015  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:10.047218  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:10.188404  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:10.373931  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:10.543193  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:10.544604  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:10.687427  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:10.876551  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:11.044394  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:11.044539  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:11.187673  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:11.372860  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:11.544419  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:11.544525  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:11.687585  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:11.873078  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:12.044395  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:12.044828  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:12.187830  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:12.373789  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:12.545064  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:12.545374  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:12.687138  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:12.874214  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:13.044245  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:13.044575  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:13.187502  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:13.376087  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:13.555146  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:13.555549  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:13.689697  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:13.879031  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:14.043676  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:14.045018  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:13:14.188501  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:14.373850  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:14.545662  908469 kapi.go:107] duration metric: took 1m15.005640056s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 10:13:14.546374  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:14.687767  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:14.874280  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:15.045344  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:15.188185  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:15.374268  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:15.544431  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:15.687557  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:15.873268  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:16.044372  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:16.188378  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:16.373824  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:16.543872  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:16.687485  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:16.873433  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:17.043358  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:17.187977  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:17.375174  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:17.543469  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:17.691736  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:17.875512  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:18.043733  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:18.188423  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:18.372606  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:18.546961  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:18.687966  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:18.874917  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:19.045139  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:19.186883  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:19.373732  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:19.544392  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:19.687993  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:19.874057  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:20.043663  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:20.193310  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:20.373177  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:20.543542  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:20.687621  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:20.873298  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:21.046380  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:21.187371  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:13:21.373633  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:21.551888  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:21.696721  908469 kapi.go:107] duration metric: took 1m18.012476607s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 10:13:21.700445  908469 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-054604 cluster.
	I1213 10:13:21.703406  908469 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 10:13:21.706394  908469 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 10:13:21.873347  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:22.043449  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:22.372509  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:22.544171  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:22.873828  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:23.044154  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:23.373379  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:23.543787  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:23.873687  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:24.044038  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:24.373625  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:24.543950  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:24.873457  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:25.043296  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:25.372427  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:25.543842  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:25.874024  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:26.043563  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:26.373745  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:26.544186  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:26.873600  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:27.044097  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:27.377438  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:27.546884  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:13:27.873791  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:28.046090  908469 kapi.go:107] duration metric: took 1m28.506742164s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 10:13:28.378851  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:28.878885  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:29.373036  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:29.873220  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:30.373489  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:30.873792  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:31.373910  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:31.873830  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:32.373755  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:32.878179  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:33.373062  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:33.873040  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:34.372890  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:13:34.873916  908469 kapi.go:107] duration metric: took 1m35.004421787s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 10:13:34.876983  908469 out.go:179] * Enabled addons: registry-creds, cloud-spanner, storage-provisioner, ingress-dns, inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 10:13:34.879871  908469 addons.go:530] duration metric: took 1m41.379193757s for enable addons: enabled=[registry-creds cloud-spanner storage-provisioner ingress-dns inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1213 10:13:34.879947  908469 start.go:247] waiting for cluster config update ...
	I1213 10:13:34.880003  908469 start.go:256] writing updated cluster config ...
	I1213 10:13:34.880347  908469 ssh_runner.go:195] Run: rm -f paused
	I1213 10:13:34.883959  908469 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:13:34.887623  908469 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t662h" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.892854  908469 pod_ready.go:94] pod "coredns-66bc5c9577-t662h" is "Ready"
	I1213 10:13:34.892951  908469 pod_ready.go:86] duration metric: took 5.298311ms for pod "coredns-66bc5c9577-t662h" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.895604  908469 pod_ready.go:83] waiting for pod "etcd-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.900461  908469 pod_ready.go:94] pod "etcd-addons-054604" is "Ready"
	I1213 10:13:34.900492  908469 pod_ready.go:86] duration metric: took 4.859364ms for pod "etcd-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.903122  908469 pod_ready.go:83] waiting for pod "kube-apiserver-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.907949  908469 pod_ready.go:94] pod "kube-apiserver-addons-054604" is "Ready"
	I1213 10:13:34.907981  908469 pod_ready.go:86] duration metric: took 4.829136ms for pod "kube-apiserver-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.910511  908469 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:35.288428  908469 pod_ready.go:94] pod "kube-controller-manager-addons-054604" is "Ready"
	I1213 10:13:35.288458  908469 pod_ready.go:86] duration metric: took 377.917606ms for pod "kube-controller-manager-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:35.488097  908469 pod_ready.go:83] waiting for pod "kube-proxy-hp7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:35.888369  908469 pod_ready.go:94] pod "kube-proxy-hp7zc" is "Ready"
	I1213 10:13:35.888398  908469 pod_ready.go:86] duration metric: took 400.267614ms for pod "kube-proxy-hp7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:36.088286  908469 pod_ready.go:83] waiting for pod "kube-scheduler-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:36.487416  908469 pod_ready.go:94] pod "kube-scheduler-addons-054604" is "Ready"
	I1213 10:13:36.487464  908469 pod_ready.go:86] duration metric: took 399.130501ms for pod "kube-scheduler-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:36.487504  908469 pod_ready.go:40] duration metric: took 1.603509098s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:13:36.540269  908469 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 10:13:36.543395  908469 out.go:179] * Done! kubectl is now configured to use "addons-054604" cluster and "default" namespace by default
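
The kapi.go:96/107 pairs above are minikube's label-selector wait loop: list the pods matching a label, log the observed phase each round, and poll until everything is Running, then emit a duration metric; the pod_ready.go lines that follow do the same per pod against the Ready condition. A minimal sketch of that pattern with client-go (an independent illustration, not minikube's actual kapi.go; waitForPodsByLabel and the 500ms poll interval are invented here):

    package waitutil

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsByLabel polls until every pod matching selector in ns reports
    // phase Running, logging the observed state each round and a duration
    // metric at the end -- the shape of the kapi.go:96/107 lines above.
    func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    // No matches yet (or a transient API error): keep polling.
                    fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
        }
        return err
    }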
	
	
	==> CRI-O <==
	Dec 13 10:15:48 addons-054604 crio[831]: time="2025-12-13T10:15:48.002302264Z" level=info msg="Removed pod sandbox: 493e7f176dcaf792d52eace77d201a1fc8112aa5a7d00db56c95199703b28c45" id=05d11516-4159-41d7-8e88-e226d1bd25cc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.072948667Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-zntvf/POD" id=74ea13ba-9885-457f-84c6-66aef96e591d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.073020733Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.098041982Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zntvf Namespace:default ID:3a68a0afca9d7014a52b489c0d9f2cecff81bcf86237814af56c054921de6360 UID:73bff38a-d01f-4903-8b6c-bd8c7497030b NetNS:/var/run/netns/309251ac-3631-48d0-949a-85f930a2c841 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000c35d38}] Aliases:map[]}"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.098098385Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-zntvf to CNI network \"kindnet\" (type=ptp)"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.130540251Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zntvf Namespace:default ID:3a68a0afca9d7014a52b489c0d9f2cecff81bcf86237814af56c054921de6360 UID:73bff38a-d01f-4903-8b6c-bd8c7497030b NetNS:/var/run/netns/309251ac-3631-48d0-949a-85f930a2c841 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000c35d38}] Aliases:map[]}"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.13087641Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-zntvf for CNI network kindnet (type=ptp)"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.13896951Z" level=info msg="Ran pod sandbox 3a68a0afca9d7014a52b489c0d9f2cecff81bcf86237814af56c054921de6360 with infra container: default/hello-world-app-5d498dc89-zntvf/POD" id=74ea13ba-9885-457f-84c6-66aef96e591d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.157970447Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=34f2b840-d1ad-4ff3-a6a4-6ec56007f81d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.158322763Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=34f2b840-d1ad-4ff3-a6a4-6ec56007f81d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.158462531Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=34f2b840-d1ad-4ff3-a6a4-6ec56007f81d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.16024258Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=5159f584-2beb-4718-8093-a0c9cef05691 name=/runtime.v1.ImageService/PullImage
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.164425981Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.850705938Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=5159f584-2beb-4718-8093-a0c9cef05691 name=/runtime.v1.ImageService/PullImage
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.851676107Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=23964653-0c81-442e-951d-981c26a16379 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.855263128Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=66fa3572-5761-47fd-b1e3-168bd77210ba name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.86216144Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-zntvf/hello-world-app" id=97eb63d4-7b8a-473a-9985-3b566cc7eaa4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.862277044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.881007421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.881267223Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8064ed043d7b907b864da48aa7a3bba674640d2f37860940692e97983c1c9c16/merged/etc/passwd: no such file or directory"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.881294465Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8064ed043d7b907b864da48aa7a3bba674640d2f37860940692e97983c1c9c16/merged/etc/group: no such file or directory"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.883123417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.922422373Z" level=info msg="Created container 3a0085c82e0174ec419a176ddb50d54cdbb29e674808f392c52b035980a5a144: default/hello-world-app-5d498dc89-zntvf/hello-world-app" id=97eb63d4-7b8a-473a-9985-3b566cc7eaa4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.925209195Z" level=info msg="Starting container: 3a0085c82e0174ec419a176ddb50d54cdbb29e674808f392c52b035980a5a144" id=ddfc945e-5b89-4f7f-91cb-97155b3c464e name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 10:16:33 addons-054604 crio[831]: time="2025-12-13T10:16:33.934212583Z" level=info msg="Started container" PID=6941 containerID=3a0085c82e0174ec419a176ddb50d54cdbb29e674808f392c52b035980a5a144 description=default/hello-world-app-5d498dc89-zntvf/hello-world-app id=ddfc945e-5b89-4f7f-91cb-97155b3c464e name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a68a0afca9d7014a52b489c0d9f2cecff81bcf86237814af56c054921de6360
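
The ImageStatus, PullImage, CreateContainer, and StartContainer entries in the CRI-O log are ordinary CRI RPCs arriving over the runtime's unix socket. A minimal sketch of the first two using the published k8s.io/cri-api client, assuming CRI-O's default socket path, root privileges, and grpc-go >= 1.63 for grpc.NewClient (the image name is taken from the log):

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default client socket; adjust if configured differently.
        conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        spec := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

        // Equivalent of the "Checking image status" lines above.
        st, err := img.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{Image: spec})
        if err != nil {
            log.Fatal(err)
        }
        if st.Image == nil {
            // Equivalent of the "Pulling image" / "Pulled image" pair.
            if _, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{Image: spec}); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("image present")
    }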
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	3a0085c82e017       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   3a68a0afca9d7       hello-world-app-5d498dc89-zntvf             default
	db58c086916d9       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago            Running             nginx                                    0                   e263c6aed5f66       nginx                                       default
	2331841998ff0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   4616a2e4f9ac8       busybox                                     default
	9dfc412275d47       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	411736ab35d31       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	fe7aa6350e217       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	8a3729518104c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	10f4327e1d3d1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	1cde751118248       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   81b13b71009ee       ingress-nginx-controller-85d4c799dd-7hn6s   ingress-nginx
	f2fc5b929f6b1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   16531eadfd416       gcp-auth-78565c9fb4-lvdkf                   gcp-auth
	e94b6593a2663       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   88cab76c5cc40       gadget-69sd7                                gadget
	1380fedb08b07       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   c06308ecafd98       registry-proxy-xclch                        kube-system
	441a32eb57b51       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	f871736240871       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   cf5357d3d4197       csi-hostpath-attacher-0                     kube-system
	a62092f409cd8       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    2                   51e01e53770cc       ingress-nginx-admission-patch-484xv         ingress-nginx
	3ded6e57579bd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   b467b88894539       snapshot-controller-7d9fbc56b8-8tp2d        kube-system
	b0e2d0e7e16b2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   c2c08130fe129       snapshot-controller-7d9fbc56b8-bbmhp        kube-system
	c6400c2f13db8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   b5fdb9b87cfdd       local-path-provisioner-648f6765c9-74jdr     local-path-storage
	02565e8c756a3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   9d8c398c5dcec       registry-6b586f9694-4bxkh                   kube-system
	e7a671c2a2145       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   ba8231160a3d0       ingress-nginx-admission-create-x5kpk        ingress-nginx
	17cb3c6d34002       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   e617197090ff2       kube-ingress-dns-minikube                   kube-system
	b9bc680915f66       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   4ca206b74a66d       cloud-spanner-emulator-5bdddb765-q7dcr      default
	26436889f5cc9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   eb79efb791625       nvidia-device-plugin-daemonset-gzjcp        kube-system
	43da519983e66       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   bb565e7cb69ea       csi-hostpath-resizer-0                      kube-system
	eceabc21413f6       docker.io/marcnuri/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                              3 minutes ago            Running             yakd                                     0                   90c509eb45619       yakd-dashboard-6654c87f9b-wwf8h             yakd-dashboard
	b95fb046aaf43       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   1f424dd3ab02e       metrics-server-85b7d694d7-2ppdp             kube-system
	6211c2eaceea4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             3 minutes ago            Running             storage-provisioner                      0                   01a5ac9ef9183       storage-provisioner                         kube-system
	f5883fd88845b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             3 minutes ago            Running             coredns                                  0                   9e8398db2e6d7       coredns-66bc5c9577-t662h                    kube-system
	ef33020503a2d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   ea3874e7b458b       kube-proxy-hp7zc                            kube-system
	5add978c4ef16       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   91e0693d65d50       kindnet-wx4r9                               kube-system
	dc808fcd2f20c       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             4 minutes ago            Running             kube-scheduler                           0                   255eaef63aa04       kube-scheduler-addons-054604                kube-system
	20394cb814363       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   5d972d122c495       etcd-addons-054604                          kube-system
	e1f2fa7dc8f92       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             4 minutes ago            Running             kube-controller-manager                  0                   d01569133b014       kube-controller-manager-addons-054604       kube-system
	3554210f6ef5f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             4 minutes ago            Running             kube-apiserver                           0                   3fc2fbbc4e83e       kube-apiserver-addons-054604                kube-system
	
	
	==> coredns [f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6] <==
	[INFO] 10.244.0.14:45197 - 54454 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004608695s
	[INFO] 10.244.0.14:45197 - 37517 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000132777s
	[INFO] 10.244.0.14:45197 - 5526 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000199058s
	[INFO] 10.244.0.14:59467 - 24127 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148662s
	[INFO] 10.244.0.14:59467 - 23905 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163965s
	[INFO] 10.244.0.14:42631 - 34311 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112888s
	[INFO] 10.244.0.14:42631 - 34139 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000157935s
	[INFO] 10.244.0.14:47889 - 18634 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114931s
	[INFO] 10.244.0.14:47889 - 18181 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176462s
	[INFO] 10.244.0.14:51722 - 61521 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001295613s
	[INFO] 10.244.0.14:51722 - 61324 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001317735s
	[INFO] 10.244.0.14:44518 - 58636 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000115981s
	[INFO] 10.244.0.14:44518 - 58456 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000166468s
	[INFO] 10.244.0.21:58426 - 44169 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000210439s
	[INFO] 10.244.0.21:59614 - 31790 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014945s
	[INFO] 10.244.0.21:33206 - 60804 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124145s
	[INFO] 10.244.0.21:37664 - 53028 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130487s
	[INFO] 10.244.0.21:35513 - 12061 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128781s
	[INFO] 10.244.0.21:42379 - 23024 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131054s
	[INFO] 10.244.0.21:57259 - 60829 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004006554s
	[INFO] 10.244.0.21:34817 - 57279 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003528534s
	[INFO] 10.244.0.21:52894 - 56547 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.004683584s
	[INFO] 10.244.0.21:41134 - 8800 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006230342s
	[INFO] 10.244.0.23:45208 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000168634s
	[INFO] 10.244.0.23:42217 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091357s
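
The NXDOMAIN/NOERROR pairs above are the pod resolver walking its search path: registry.kube-system.svc.cluster.local has fewer dots than the pod's ndots:5 setting, so each resolv.conf search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) is tried and fails before the name is queried as-is and answers. A small Go probe of that final lookup, pointed directly at the cluster DNS service (10.96.0.10 is minikube's conventional kube-dns ClusterIP, an assumption here):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Send queries straight to the cluster DNS service instead of the
        // host's configured nameserver.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        // Run inside a pod, an under-qualified name picks up the pod's search
        // path first (the NXDOMAIN walk in the coredns log); the fully
        // qualified form below resolves in a single query.
        addrs, err := r.LookupHost(context.Background(), "registry.kube-system.svc.cluster.local")
        fmt.Println(addrs, err)
    }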
	
	
	==> describe nodes <==
	Name:               addons-054604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-054604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=addons-054604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T10_11_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-054604
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-054604"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 10:11:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-054604
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 10:16:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 10:16:03 +0000   Sat, 13 Dec 2025 10:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 10:16:03 +0000   Sat, 13 Dec 2025 10:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 10:16:03 +0000   Sat, 13 Dec 2025 10:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 10:16:03 +0000   Sat, 13 Dec 2025 10:12:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-054604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                2949b3e3-1bf6-486b-8e0a-6501682d5a50
	  Boot ID:                    ff73813c-a05d-46ba-ba43-f4a4c3dc42b1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     cloud-spanner-emulator-5bdddb765-q7dcr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  default                     hello-world-app-5d498dc89-zntvf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-69sd7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  gcp-auth                    gcp-auth-78565c9fb4-lvdkf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-7hn6s    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m35s
	  kube-system                 coredns-66bc5c9577-t662h                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m41s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 csi-hostpathplugin-8fv49                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 etcd-addons-054604                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m46s
	  kube-system                 kindnet-wx4r9                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m42s
	  kube-system                 kube-apiserver-addons-054604                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-054604        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-hp7zc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-054604                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 metrics-server-85b7d694d7-2ppdp              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m36s
	  kube-system                 nvidia-device-plugin-daemonset-gzjcp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 registry-6b586f9694-4bxkh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 registry-creds-764b6fb674-2htf4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-proxy-xclch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 snapshot-controller-7d9fbc56b8-8tp2d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 snapshot-controller-7d9fbc56b8-bbmhp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  local-path-storage          local-path-provisioner-648f6765c9-74jdr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-wwf8h              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m40s  kube-proxy       
	  Normal   Starting                 4m47s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m47s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m46s  kubelet          Node addons-054604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m46s  kubelet          Node addons-054604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m46s  kubelet          Node addons-054604 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m42s  node-controller  Node addons-054604 event: Registered Node addons-054604 in Controller
	  Normal   NodeReady                4m     kubelet          Node addons-054604 status is now: NodeReady
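
The "Allocated resources" table above is computed, not stored: kubectl describe sums the resource requests of every non-terminated pod scheduled to the node. A sketch of that sum for CPU with client-go (init containers ignored for brevity, so this approximates the describe output; cpuRequestsOnNode is an invented helper name):

    package nodealloc

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // cpuRequestsOnNode totals spec.containers[].resources.requests["cpu"]
    // across all non-terminated pods bound to nodeName.
    func cpuRequestsOnNode(ctx context.Context, c kubernetes.Interface, nodeName string) (resource.Quantity, error) {
        total := resource.Quantity{}
        pods, err := c.CoreV1().Pods("").List(ctx, metav1.ListOptions{
            // Same filter kubectl describe uses: on this node, not finished.
            FieldSelector: "spec.nodeName=" + nodeName +
                ",status.phase!=Succeeded,status.phase!=Failed",
        })
        if err != nil {
            return total, err
        }
        for _, p := range pods.Items {
            for _, ctr := range p.Spec.Containers {
                if q, ok := ctr.Resources.Requests[corev1.ResourceCPU]; ok {
                    total.Add(q)
                }
            }
        }
        return total, nil
    }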
	
	
	==> dmesg <==
	[Dec13 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283] <==
	{"level":"warn","ts":"2025-12-13T10:11:44.038255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.051737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.090745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.101323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.136333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.143921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.159439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.174569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.194326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.206222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.222348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.238984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.253738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.268282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.290002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.322224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.333722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.348767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.421159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:00.654379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:00.718180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.261085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.277741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.320191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.335506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f2fc5b929f6b1fffd663ee10be5c61705ab1807c9f8199e01f38faae41bd3143] <==
	2025/12/13 10:13:20 GCP Auth Webhook started!
	2025/12/13 10:13:37 Ready to marshal response ...
	2025/12/13 10:13:37 Ready to write response ...
	2025/12/13 10:13:37 Ready to marshal response ...
	2025/12/13 10:13:37 Ready to write response ...
	2025/12/13 10:13:37 Ready to marshal response ...
	2025/12/13 10:13:37 Ready to write response ...
	2025/12/13 10:13:56 Ready to marshal response ...
	2025/12/13 10:13:56 Ready to write response ...
	2025/12/13 10:14:12 Ready to marshal response ...
	2025/12/13 10:14:12 Ready to write response ...
	2025/12/13 10:14:13 Ready to marshal response ...
	2025/12/13 10:14:13 Ready to write response ...
	2025/12/13 10:14:36 Ready to marshal response ...
	2025/12/13 10:14:36 Ready to write response ...
	2025/12/13 10:14:56 Ready to marshal response ...
	2025/12/13 10:14:56 Ready to write response ...
	2025/12/13 10:14:56 Ready to marshal response ...
	2025/12/13 10:14:56 Ready to write response ...
	2025/12/13 10:15:04 Ready to marshal response ...
	2025/12/13 10:15:04 Ready to write response ...
	2025/12/13 10:16:32 Ready to marshal response ...
	2025/12/13 10:16:32 Ready to write response ...
	
	
	==> kernel <==
	 10:16:34 up  4:59,  0 user,  load average: 0.71, 1.86, 1.92
	Linux addons-054604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26] <==
	I1213 10:14:33.932394       1 main.go:301] handling current node
	I1213 10:14:43.926719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:14:43.926763       1 main.go:301] handling current node
	I1213 10:14:53.934700       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:14:53.934738       1 main.go:301] handling current node
	I1213 10:15:03.926819       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:15:03.926855       1 main.go:301] handling current node
	I1213 10:15:13.926132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:15:13.926172       1 main.go:301] handling current node
	I1213 10:15:23.926192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:15:23.926306       1 main.go:301] handling current node
	I1213 10:15:33.932630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:15:33.932669       1 main.go:301] handling current node
	I1213 10:15:43.935659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:15:43.935694       1 main.go:301] handling current node
	I1213 10:15:53.926960       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:15:53.927066       1 main.go:301] handling current node
	I1213 10:16:03.926831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:16:03.926873       1 main.go:301] handling current node
	I1213 10:16:13.935329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:16:13.935365       1 main.go:301] handling current node
	I1213 10:16:23.935027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:16:23.935063       1 main.go:301] handling current node
	I1213 10:16:33.926417       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:16:33.926449       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0] <==
	E1213 10:12:40.144604       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.145.86:443: connect: connection refused" logger="UnhandledError"
	W1213 10:12:41.145036       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:12:41.145080       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 10:12:41.145094       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 10:12:41.145152       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:12:41.145213       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 10:12:41.146319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 10:12:45.163096       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:12:45.163160       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 10:12:45.163749       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1213 10:12:45.241021       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 10:13:45.981399       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49846: use of closed network connection
	E1213 10:13:46.228049       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49876: use of closed network connection
	E1213 10:13:46.362409       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49896: use of closed network connection
	I1213 10:14:12.262641       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 10:14:12.564274       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.60.189"}
	I1213 10:14:22.498879       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1213 10:14:24.842779       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1213 10:16:32.958601       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.6.116"}
	
	
	==> kube-controller-manager [e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517] <==
	I1213 10:11:52.288551       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-054604"
	I1213 10:11:52.288660       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 10:11:52.289467       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:11:52.289554       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 10:11:52.289586       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 10:11:52.290070       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 10:11:52.290889       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 10:11:52.290906       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 10:11:52.290960       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 10:11:52.291104       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 10:11:52.291113       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 10:11:52.291127       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 10:11:52.292645       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 10:11:52.292668       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 10:11:52.296408       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 10:11:52.299583       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 10:11:58.324104       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 10:12:22.253604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 10:12:22.253756       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 10:12:22.253834       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 10:12:22.308165       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 10:12:22.312573       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 10:12:22.354423       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:12:22.413443       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:12:37.295229       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d] <==
	I1213 10:11:54.382421       1 server_linux.go:53] "Using iptables proxy"
	I1213 10:11:54.528184       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 10:11:54.629042       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:11:54.629114       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 10:11:54.629197       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:11:54.736448       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 10:11:54.736504       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:11:54.751339       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:11:54.751641       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:11:54.751655       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:11:54.763339       1 config.go:200] "Starting service config controller"
	I1213 10:11:54.763359       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:11:54.763374       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:11:54.763378       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:11:54.763385       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:11:54.763389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:11:54.764341       1 config.go:309] "Starting node config controller"
	I1213 10:11:54.764350       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:11:54.764356       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:11:54.864159       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 10:11:54.864203       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:11:54.864216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14] <==
	E1213 10:11:45.284658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:11:45.284727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 10:11:45.289124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 10:11:45.303501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 10:11:45.303629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 10:11:45.303699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 10:11:45.303805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 10:11:45.303879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 10:11:45.303939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 10:11:45.303999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:11:45.304054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:11:45.304112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 10:11:45.304157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 10:11:45.304204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:11:45.304252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 10:11:45.304407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 10:11:45.304472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 10:11:46.132634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 10:11:46.197458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:11:46.212219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:11:46.227654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:11:46.268419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 10:11:46.286953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:11:46.296663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 10:11:46.866703       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.136167    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnrm\" (UniqueName: \"kubernetes.io/projected/a0b24d1b-5a96-4a03-a0a1-780b9382a569-kube-api-access-psnrm\") pod \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\" (UID: \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\") "
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.136239    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a0b24d1b-5a96-4a03-a0a1-780b9382a569-gcp-creds\") pod \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\" (UID: \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\") "
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.136275    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a0b24d1b-5a96-4a03-a0a1-780b9382a569-script\") pod \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\" (UID: \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\") "
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.136300    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a0b24d1b-5a96-4a03-a0a1-780b9382a569-data\") pod \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\" (UID: \"a0b24d1b-5a96-4a03-a0a1-780b9382a569\") "
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.136462    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0b24d1b-5a96-4a03-a0a1-780b9382a569-data" (OuterVolumeSpecName: "data") pod "a0b24d1b-5a96-4a03-a0a1-780b9382a569" (UID: "a0b24d1b-5a96-4a03-a0a1-780b9382a569"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.136923    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0b24d1b-5a96-4a03-a0a1-780b9382a569-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a0b24d1b-5a96-4a03-a0a1-780b9382a569" (UID: "a0b24d1b-5a96-4a03-a0a1-780b9382a569"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.137228    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b24d1b-5a96-4a03-a0a1-780b9382a569-script" (OuterVolumeSpecName: "script") pod "a0b24d1b-5a96-4a03-a0a1-780b9382a569" (UID: "a0b24d1b-5a96-4a03-a0a1-780b9382a569"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.142198    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b24d1b-5a96-4a03-a0a1-780b9382a569-kube-api-access-psnrm" (OuterVolumeSpecName: "kube-api-access-psnrm") pod "a0b24d1b-5a96-4a03-a0a1-780b9382a569" (UID: "a0b24d1b-5a96-4a03-a0a1-780b9382a569"). InnerVolumeSpecName "kube-api-access-psnrm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.237625    1279 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a0b24d1b-5a96-4a03-a0a1-780b9382a569-script\") on node \"addons-054604\" DevicePath \"\""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.237668    1279 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a0b24d1b-5a96-4a03-a0a1-780b9382a569-data\") on node \"addons-054604\" DevicePath \"\""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.237681    1279 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-psnrm\" (UniqueName: \"kubernetes.io/projected/a0b24d1b-5a96-4a03-a0a1-780b9382a569-kube-api-access-psnrm\") on node \"addons-054604\" DevicePath \"\""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.237696    1279 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a0b24d1b-5a96-4a03-a0a1-780b9382a569-gcp-creds\") on node \"addons-054604\" DevicePath \"\""
	Dec 13 10:15:06 addons-054604 kubelet[1279]: I1213 10:15:06.981032    1279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68db50b6606be03f63bdff7858b1ddcbb11853149a8ba339af6716cf25b00c8f"
	Dec 13 10:15:06 addons-054604 kubelet[1279]: E1213 10:15:06.982847    1279 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-39853fcc-b135-458e-957a-4cf093e2ffac\" is forbidden: User \"system:node:addons-054604\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-054604' and this object" podUID="a0b24d1b-5a96-4a03-a0a1-780b9382a569" pod="local-path-storage/helper-pod-delete-pvc-39853fcc-b135-458e-957a-4cf093e2ffac"
	Dec 13 10:15:07 addons-054604 kubelet[1279]: E1213 10:15:07.795497    1279 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-39853fcc-b135-458e-957a-4cf093e2ffac\" is forbidden: User \"system:node:addons-054604\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-054604' and this object" podUID="a0b24d1b-5a96-4a03-a0a1-780b9382a569" pod="local-path-storage/helper-pod-delete-pvc-39853fcc-b135-458e-957a-4cf093e2ffac"
	Dec 13 10:15:07 addons-054604 kubelet[1279]: I1213 10:15:07.796671    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b24d1b-5a96-4a03-a0a1-780b9382a569" path="/var/lib/kubelet/pods/a0b24d1b-5a96-4a03-a0a1-780b9382a569/volumes"
	Dec 13 10:15:24 addons-054604 kubelet[1279]: I1213 10:15:24.793434    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gzjcp" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:15:40 addons-054604 kubelet[1279]: I1213 10:15:40.793370    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-4bxkh" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:15:47 addons-054604 kubelet[1279]: I1213 10:15:47.942824    1279 scope.go:117] "RemoveContainer" containerID="0a76f94bb75f8d9bd8f7319c7f9ea14ab46e97152e8c399ef2d2491e6bae3e41"
	Dec 13 10:15:47 addons-054604 kubelet[1279]: I1213 10:15:47.953729    1279 scope.go:117] "RemoveContainer" containerID="f66408f0f36a6ae8bcaa2b729de66fdca4dd5dc7a8e457bc01d8c0faed154c4e"
	Dec 13 10:15:56 addons-054604 kubelet[1279]: I1213 10:15:56.793283    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xclch" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:16:28 addons-054604 kubelet[1279]: I1213 10:16:28.794122    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gzjcp" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:16:32 addons-054604 kubelet[1279]: I1213 10:16:32.867573    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/73bff38a-d01f-4903-8b6c-bd8c7497030b-gcp-creds\") pod \"hello-world-app-5d498dc89-zntvf\" (UID: \"73bff38a-d01f-4903-8b6c-bd8c7497030b\") " pod="default/hello-world-app-5d498dc89-zntvf"
	Dec 13 10:16:32 addons-054604 kubelet[1279]: I1213 10:16:32.868122    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jt99\" (UniqueName: \"kubernetes.io/projected/73bff38a-d01f-4903-8b6c-bd8c7497030b-kube-api-access-6jt99\") pod \"hello-world-app-5d498dc89-zntvf\" (UID: \"73bff38a-d01f-4903-8b6c-bd8c7497030b\") " pod="default/hello-world-app-5d498dc89-zntvf"
	Dec 13 10:16:33 addons-054604 kubelet[1279]: W1213 10:16:33.138352    1279 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/crio-3a68a0afca9d7014a52b489c0d9f2cecff81bcf86237814af56c054921de6360 WatchSource:0}: Error finding container 3a68a0afca9d7014a52b489c0d9f2cecff81bcf86237814af56c054921de6360: Status 404 returned error can't find the container with id 3a68a0afca9d7014a52b489c0d9f2cecff81bcf86237814af56c054921de6360
	
	
	==> storage-provisioner [6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9] <==
	W1213 10:16:10.868424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:12.871236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:12.875806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:14.878997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:14.883778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:16.887748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:16.894433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:18.897108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:18.901973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:20.905626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:20.912434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:22.915502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:22.922095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:24.925276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:24.930092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:26.933971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:26.938401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:28.942187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:28.949020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:30.953191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:30.958624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:32.965330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:33.023749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:35.028525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:16:35.033838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-054604 -n addons-054604
helpers_test.go:270: (dbg) Run:  kubectl --context addons-054604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-054604 describe pod ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-054604 describe pod ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4: exit status 1 (80.477768ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-x5kpk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-484xv" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-2htf4" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-054604 describe pod ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (305.327543ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:16:35.976161  917854 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:16:35.976981  917854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:16:35.976995  917854 out.go:374] Setting ErrFile to fd 2...
	I1213 10:16:35.977001  917854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:16:35.977262  917854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:16:35.977584  917854 mustload.go:66] Loading cluster: addons-054604
	I1213 10:16:35.977950  917854 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:16:35.977967  917854 addons.go:622] checking whether the cluster is paused
	I1213 10:16:35.978076  917854 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:16:35.978092  917854 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:16:35.978608  917854 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:16:36.003214  917854 ssh_runner.go:195] Run: systemctl --version
	I1213 10:16:36.003275  917854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:16:36.026573  917854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:16:36.133334  917854 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:16:36.133442  917854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:16:36.178451  917854 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:16:36.178471  917854 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:16:36.178476  917854 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:16:36.178480  917854 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:16:36.178484  917854 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:16:36.178491  917854 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:16:36.178495  917854 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:16:36.178499  917854 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:16:36.178502  917854 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:16:36.178517  917854 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:16:36.178521  917854 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:16:36.178524  917854 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:16:36.178527  917854 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:16:36.178530  917854 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:16:36.178533  917854 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:16:36.178541  917854 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:16:36.178545  917854 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:16:36.178549  917854 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:16:36.178553  917854 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:16:36.178556  917854 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:16:36.178561  917854 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:16:36.178564  917854 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:16:36.178567  917854 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:16:36.178570  917854 cri.go:89] found id: ""
	I1213 10:16:36.178622  917854 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:16:36.201964  917854 out.go:203] 
	W1213 10:16:36.204987  917854 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:16:36.205012  917854 out.go:285] * 
	* 
	W1213 10:16:36.212663  917854 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:16:36.219600  917854 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable ingress --alsologtostderr -v=1: exit status 11 (279.86825ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:16:36.279749  917967 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:16:36.280517  917967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:16:36.280552  917967 out.go:374] Setting ErrFile to fd 2...
	I1213 10:16:36.280572  917967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:16:36.280865  917967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:16:36.281177  917967 mustload.go:66] Loading cluster: addons-054604
	I1213 10:16:36.281603  917967 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:16:36.281639  917967 addons.go:622] checking whether the cluster is paused
	I1213 10:16:36.281772  917967 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:16:36.281797  917967 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:16:36.282336  917967 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:16:36.299465  917967 ssh_runner.go:195] Run: systemctl --version
	I1213 10:16:36.299519  917967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:16:36.331388  917967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:16:36.440378  917967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:16:36.440471  917967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:16:36.473851  917967 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:16:36.473883  917967 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:16:36.473889  917967 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:16:36.473894  917967 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:16:36.473897  917967 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:16:36.473901  917967 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:16:36.473904  917967 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:16:36.473907  917967 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:16:36.473911  917967 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:16:36.473917  917967 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:16:36.473921  917967 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:16:36.473924  917967 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:16:36.473928  917967 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:16:36.473932  917967 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:16:36.473936  917967 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:16:36.473940  917967 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:16:36.473947  917967 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:16:36.473951  917967 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:16:36.473955  917967 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:16:36.473958  917967 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:16:36.473962  917967 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:16:36.473966  917967 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:16:36.473969  917967 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:16:36.473973  917967 cri.go:89] found id: ""
	I1213 10:16:36.474025  917967 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:16:36.488810  917967 out.go:203] 
	W1213 10:16:36.491908  917967 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:16:36.491935  917967 out.go:285] * 
	* 
	W1213 10:16:36.499351  917967 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:16:36.502332  917967 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.57s)
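The `addons disable` failures above (ingress-dns, ingress, and inspektor-gadget below) all exit with status 11 for the same reason: before disabling an addon, minikube checks whether the cluster is paused ("addons.go:622] checking whether the cluster is paused"), and that check shells out to `sudo runc list -f json`, which fails on this crio node because /run/runc does not exist. A minimal sketch to reproduce the check by hand, assuming the addons-054604 profile from this run is still up; the commands mirror the cli_runner/ssh_runner lines in each stderr block, and passing the remote command via `minikube ssh --` is this sketch's convention, not something the test itself does:

	# node state, as minikube's cli_runner inspects it
	docker container inspect addons-054604 --format={{.State.Status}}
	# kube-system containers are found fine via crictl ...
	minikube -p addons-054604 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# ... but the paused check then queries runc directly, which fails here:
	minikube -p addons-054604 ssh -- sudo runc list -f json
	# expected on this image: "open /run/runc: no such file or directory" (exit status 1)

The same MK_ADDON_DISABLE_PAUSED stderr recurs verbatim in each disable attempt below.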

TestAddons/parallel/InspektorGadget (5.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-69sd7" [b555c663-c9c6-4144-8032-41fb4ed83d6a] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003695872s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (286.528635ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:14:11.706724  915448 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:11.707688  915448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:11.707706  915448 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:11.707712  915448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:11.708613  915448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:11.709950  915448 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:11.710672  915448 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:11.710684  915448 addons.go:622] checking whether the cluster is paused
	I1213 10:14:11.710832  915448 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:11.710864  915448 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:11.711795  915448 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:11.736654  915448 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:11.736711  915448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:11.756324  915448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:11.864344  915448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:11.864432  915448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:11.896825  915448 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:11.896852  915448 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:11.896858  915448 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:11.896861  915448 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:11.896865  915448 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:11.896868  915448 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:11.896871  915448 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:11.896874  915448 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:11.896877  915448 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:11.896883  915448 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:11.896887  915448 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:11.896890  915448 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:11.896893  915448 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:11.896896  915448 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:11.896900  915448 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:11.896911  915448 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:11.896914  915448 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:11.896919  915448 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:11.896922  915448 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:11.896925  915448 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:11.896935  915448 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:11.896938  915448 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:11.896942  915448 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:11.896945  915448 cri.go:89] found id: ""
	I1213 10:14:11.897000  915448 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:11.912763  915448 out.go:203] 
	W1213 10:14:11.915676  915448 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:11.915697  915448 out.go:285] * 
	* 
	W1213 10:14:11.922916  915448 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:11.925846  915448 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.29s)
TestAddons/parallel/MetricsServer (5.39s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.673434ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004368769s
addons_test.go:465: (dbg) Run:  kubectl --context addons-054604 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (283.113375ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1213 10:14:06.409254  915365 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:06.410024  915365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:06.410064  915365 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:06.410087  915365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:06.410396  915365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:06.410736  915365 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:06.411176  915365 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:06.411222  915365 addons.go:622] checking whether the cluster is paused
	I1213 10:14:06.411358  915365 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:06.411396  915365 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:06.411926  915365 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:06.434881  915365 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:06.435024  915365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:06.457581  915365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:06.564149  915365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:06.564256  915365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:06.601350  915365 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:06.601381  915365 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:06.601387  915365 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:06.601391  915365 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:06.601395  915365 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:06.601399  915365 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:06.601402  915365 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:06.601405  915365 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:06.601408  915365 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:06.601414  915365 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:06.601417  915365 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:06.601420  915365 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:06.601424  915365 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:06.601427  915365 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:06.601430  915365 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:06.601435  915365 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:06.601438  915365 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:06.601442  915365 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:06.601445  915365 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:06.601447  915365 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:06.601452  915365 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:06.601456  915365 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:06.601460  915365 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:06.601463  915365 cri.go:89] found id: ""
	I1213 10:14:06.601517  915365 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:06.619080  915365 out.go:203] 
	W1213 10:14:06.622417  915365 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:06.622449  915365 out.go:285] * 
	* 
	W1213 10:14:06.629902  915365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:06.633318  915365 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)
TestAddons/parallel/CSI (53.99s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1213 10:13:49.917736  907484 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 10:13:49.921390  907484 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 10:13:49.921420  907484 kapi.go:107] duration metric: took 3.695215ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.706596ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-054604 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-054604 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [d43f8eb9-6dda-4038-8e34-0dea2e1f1ac0] Pending
helpers_test.go:353: "task-pv-pod" [d43f8eb9-6dda-4038-8e34-0dea2e1f1ac0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [d43f8eb9-6dda-4038-8e34-0dea2e1f1ac0] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003570946s
addons_test.go:574: (dbg) Run:  kubectl --context addons-054604 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-054604 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-054604 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-054604 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-054604 delete pod task-pv-pod: (1.231703397s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-054604 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-054604 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-054604 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [bda9eb5d-8837-4eec-9b48-57487df647bd] Pending
helpers_test.go:353: "task-pv-pod-restore" [bda9eb5d-8837-4eec-9b48-57487df647bd] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003308251s
addons_test.go:616: (dbg) Run:  kubectl --context addons-054604 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-054604 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-054604 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (295.611635ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1213 10:14:43.369244  916280 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:43.370066  916280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:43.370122  916280 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:43.370149  916280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:43.370452  916280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:43.370798  916280 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:43.371257  916280 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:43.371306  916280 addons.go:622] checking whether the cluster is paused
	I1213 10:14:43.371476  916280 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:43.371514  916280 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:43.372239  916280 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:43.392592  916280 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:43.392656  916280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:43.411552  916280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:43.522551  916280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:43.522646  916280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:43.569748  916280 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:43.569777  916280 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:43.569783  916280 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:43.569795  916280 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:43.569847  916280 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:43.569869  916280 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:43.569912  916280 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:43.569941  916280 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:43.569957  916280 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:43.569967  916280 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:43.570001  916280 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:43.570010  916280 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:43.570013  916280 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:43.570016  916280 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:43.570058  916280 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:43.570075  916280 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:43.570079  916280 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:43.570084  916280 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:43.570087  916280 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:43.570091  916280 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:43.570098  916280 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:43.570101  916280 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:43.570104  916280 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:43.570107  916280 cri.go:89] found id: ""
	I1213 10:14:43.570193  916280 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:43.592417  916280 out.go:203] 
	W1213 10:14:43.595478  916280 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:43.595512  916280 out.go:285] * 
	* 
	W1213 10:14:43.603512  916280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:43.606765  916280 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (287.889599ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1213 10:14:43.693393  916330 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:43.694300  916330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:43.694316  916330 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:43.694322  916330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:43.694599  916330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:43.694950  916330 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:43.695338  916330 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:43.695357  916330 addons.go:622] checking whether the cluster is paused
	I1213 10:14:43.695466  916330 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:43.695481  916330 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:43.695977  916330 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:43.714407  916330 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:43.714479  916330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:43.733271  916330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:43.836474  916330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:43.836556  916330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:43.867231  916330 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:43.867254  916330 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:43.867260  916330 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:43.867265  916330 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:43.867268  916330 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:43.867272  916330 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:43.867275  916330 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:43.867278  916330 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:43.867281  916330 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:43.867290  916330 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:43.867293  916330 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:43.867297  916330 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:43.867301  916330 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:43.867305  916330 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:43.867308  916330 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:43.867313  916330 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:43.867316  916330 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:43.867326  916330 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:43.867330  916330 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:43.867333  916330 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:43.867337  916330 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:43.867340  916330 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:43.867344  916330 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:43.867347  916330 cri.go:89] found id: ""
	I1213 10:14:43.867397  916330 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:43.885477  916330 out.go:203] 
	W1213 10:14:43.888586  916330 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:43.888614  916330 out.go:285] * 
	* 
	W1213 10:14:43.896361  916330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:43.899559  916330 out.go:203] 
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (53.99s)
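The long runs of identical helpers_test.go:403 lines in this section are a poll loop: the suite re-reads the claim's `.status.phase` until it reports Bound, first for `hpvc` and then for `hpvc-restore`. A hedged Go sketch of such a loop follows; `waitForPVCPhase` is a hypothetical helper written for illustration, not the suite's real implementation, though the context name and six-minute wait match the report:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPVCPhase polls the claim's .status.phase the same way the
// repeated helpers_test.go:403 lines above do, giving up at the deadline.
func waitForPVCPhase(kubectx, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pvc", pvc, "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == want {
			return nil // e.g. "Bound"
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s never reached phase %s within %s", pvc, want, timeout)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-054604", "hpvc", "Bound", 6*time.Minute))
}

Note that all the kubectl steps of this CSI test (provision, snapshot, restore) succeeded; only the trailing addon-disable calls failed, for the same runc reason as above.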
TestAddons/parallel/Headlamp (3.28s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-054604 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-054604 --alsologtostderr -v=1: exit status 11 (267.227233ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1213 10:13:46.691035  914549 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:13:46.692277  914549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:13:46.692325  914549 out.go:374] Setting ErrFile to fd 2...
	I1213 10:13:46.692349  914549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:13:46.692656  914549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:13:46.693008  914549 mustload.go:66] Loading cluster: addons-054604
	I1213 10:13:46.693429  914549 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:13:46.693467  914549 addons.go:622] checking whether the cluster is paused
	I1213 10:13:46.693768  914549 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:13:46.693807  914549 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:13:46.694355  914549 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:13:46.711971  914549 ssh_runner.go:195] Run: systemctl --version
	I1213 10:13:46.712029  914549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:13:46.730178  914549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:13:46.837605  914549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:13:46.837704  914549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:13:46.869150  914549 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:13:46.869226  914549 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:13:46.869248  914549 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:13:46.869272  914549 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:13:46.869308  914549 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:13:46.869333  914549 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:13:46.869355  914549 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:13:46.869391  914549 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:13:46.869413  914549 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:13:46.869445  914549 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:13:46.869477  914549 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:13:46.869501  914549 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:13:46.869523  914549 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:13:46.869583  914549 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:13:46.869605  914549 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:13:46.869628  914549 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:13:46.869633  914549 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:13:46.869664  914549 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:13:46.869679  914549 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:13:46.869683  914549 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:13:46.869689  914549 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:13:46.869693  914549 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:13:46.869696  914549 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:13:46.869699  914549 cri.go:89] found id: ""
	I1213 10:13:46.869778  914549 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:13:46.886418  914549 out.go:203] 
	W1213 10:13:46.889272  914549 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:13:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:13:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:13:46.889291  914549 out.go:285] * 
	* 
	W1213 10:13:46.896753  914549 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:13:46.900098  914549 out.go:203] 
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-054604 --alsologtostderr -v=1": exit status 11
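The `cli_runner.go:164` lines in each stderr block resolve the node's SSH endpoint with a Go template over `docker container inspect`; the post-mortem inspect output below shows the matching Ports map (22/tcp published on 127.0.0.1:33508). A small illustrative sketch of that lookup, using the same template string as the logs (`sshHostPort` is a hypothetical name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs the Go-template query seen in the cli_runner log
// lines and returns the host port that 22/tcp is published on
// (33508 for addons-054604 in this report).
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container,
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-054604")
	fmt.Println(port, err)
}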
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-054604
helpers_test.go:244: (dbg) docker inspect addons-054604:
-- stdout --
	[
	    {
	        "Id": "218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157",
	        "Created": "2025-12-13T10:11:22.946692325Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:11:23.023865917Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/hostname",
	        "HostsPath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/hosts",
	        "LogPath": "/var/lib/docker/containers/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157-json.log",
	        "Name": "/addons-054604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-054604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-054604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157",
	                "LowerDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4063f45ad3ed67ea0721bc07c2541d19d56758bb4ff3400c87130d0b4615befa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-054604",
	                "Source": "/var/lib/docker/volumes/addons-054604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-054604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-054604",
	                "name.minikube.sigs.k8s.io": "addons-054604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f3db1d6b6f368ceb4716970ec9fd435d05a011f4975272efc2129da757494a4c",
	            "SandboxKey": "/var/run/docker/netns/f3db1d6b6f36",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33511"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-054604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:50:02:0f:dd:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a8e1f2933777fab31ca9aef9d89fb8e54c41a232069e86fd3fdde7c2068c9f7",
	                    "EndpointID": "a7a467f80d7425e76e6b8f66ccc4ad895f571ca53eead719cd6789557e127d57",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-054604",
	                        "218e0b6bff85"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
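helpers_test.go note: the block above is the complete docker container inspect dump for the addons-054604 node container. When triaging, it is usually faster to query single fields with Go templates, the same way the harness does later in this log. A minimal sketch, assuming the profile container still exists (field paths taken from the dump above):

	docker container inspect addons-054604 --format '{{.State.Status}}'
	docker container inspect addons-054604 --format '{{.HostConfig.Memory}}'        # 4294967296 = 4 GiB, matching the dump
	docker container inspect addons-054604 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'   # 33511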
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-054604 -n addons-054604
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-054604 logs -n 25: (1.607583254s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-660868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-660868   │ jenkins │ v1.37.0 │ 13 Dec 25 10:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-660868                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-660868   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ -o=json --download-only -p download-only-083474 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-083474   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-083474                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-083474   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ -o=json --download-only -p download-only-708534 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-708534   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-708534                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-708534   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-660868                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-660868   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-083474                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-083474   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-708534                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-708534   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ --download-only -p download-docker-457146 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-457146 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ delete  │ -p download-docker-457146                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-457146 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ --download-only -p binary-mirror-558778 --alsologtostderr --binary-mirror http://127.0.0.1:34767 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-558778   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ delete  │ -p binary-mirror-558778                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-558778   │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ addons  │ enable dashboard -p addons-054604                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ addons  │ disable dashboard -p addons-054604                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ start   │ -p addons-054604 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:13 UTC │
	│ addons  │ addons-054604 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:13 UTC │                     │
	│ addons  │ addons-054604 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:13 UTC │                     │
	│ addons  │ enable headlamp -p addons-054604 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-054604          │ jenkins │ v1.37.0 │ 13 Dec 25 10:13 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
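	
	(editor's note) Every minikube invocation in this run is recorded in the audit table above; all of the failing addon tests operate on the cluster created by the long start entry. A sketch for replaying that cluster locally, assuming a local docker daemon and the arm64 test binary; the --addons list is abbreviated here, the complete set is in the table row above:
	
		out/minikube-linux-arm64 start -p addons-054604 --wait=true --memory=4096 --alsologtostderr \
		  --driver=docker --container-runtime=crio \
		  --addons=registry --addons=ingress --addons=metrics-server   # plus the remaining addons from the audit row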
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:11:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:11:15.986942  908469 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:11:15.987076  908469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:15.987086  908469 out.go:374] Setting ErrFile to fd 2...
	I1213 10:11:15.987092  908469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:15.987335  908469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:11:15.987765  908469 out.go:368] Setting JSON to false
	I1213 10:11:15.988600  908469 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17625,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:11:15.988669  908469 start.go:143] virtualization:  
	I1213 10:11:15.992589  908469 out.go:179] * [addons-054604] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:11:15.995844  908469 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:11:15.995932  908469 notify.go:221] Checking for updates...
	I1213 10:11:16.003168  908469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:11:16.007097  908469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:11:16.010417  908469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:11:16.013503  908469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:11:16.016685  908469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:11:16.019977  908469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:11:16.051149  908469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:11:16.051295  908469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:16.109237  908469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:16.099536235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:16.109351  908469 docker.go:319] overlay module found
	I1213 10:11:16.112400  908469 out.go:179] * Using the docker driver based on user configuration
	I1213 10:11:16.115149  908469 start.go:309] selected driver: docker
	I1213 10:11:16.115168  908469 start.go:927] validating driver "docker" against <nil>
	I1213 10:11:16.115182  908469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:11:16.115911  908469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:16.167793  908469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:16.158670607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:16.167948  908469 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:11:16.168176  908469 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:11:16.171230  908469 out.go:179] * Using Docker driver with root privileges
	I1213 10:11:16.174202  908469 cni.go:84] Creating CNI manager for ""
	I1213 10:11:16.174268  908469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:11:16.174279  908469 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:11:16.174357  908469 start.go:353] cluster config:
	{Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:11:16.177412  908469 out.go:179] * Starting "addons-054604" primary control-plane node in "addons-054604" cluster
	I1213 10:11:16.180228  908469 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:11:16.183111  908469 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:11:16.185987  908469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:11:16.186032  908469 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:11:16.186038  908469 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 10:11:16.186061  908469 cache.go:65] Caching tarball of preloaded images
	I1213 10:11:16.186152  908469 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:11:16.186162  908469 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 10:11:16.186519  908469 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/config.json ...
	I1213 10:11:16.186550  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/config.json: {Name:mka40e27ef638482f1994511ba19eb0581f749b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:16.205346  908469 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:11:16.205368  908469 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:11:16.205388  908469 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:11:16.205429  908469 start.go:360] acquireMachinesLock for addons-054604: {Name:mkf7dc8f8e3dcd32bb06bccf10d7da8a028997c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:11:16.205565  908469 start.go:364] duration metric: took 113.782µs to acquireMachinesLock for "addons-054604"
	I1213 10:11:16.205623  908469 start.go:93] Provisioning new machine with config: &{Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:11:16.205709  908469 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:11:16.210941  908469 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 10:11:16.211214  908469 start.go:159] libmachine.API.Create for "addons-054604" (driver="docker")
	I1213 10:11:16.211254  908469 client.go:173] LocalClient.Create starting
	I1213 10:11:16.211366  908469 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem
	I1213 10:11:16.391638  908469 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem
	I1213 10:11:16.586284  908469 cli_runner.go:164] Run: docker network inspect addons-054604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:11:16.602945  908469 cli_runner.go:211] docker network inspect addons-054604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:11:16.603038  908469 network_create.go:284] running [docker network inspect addons-054604] to gather additional debugging logs...
	I1213 10:11:16.603061  908469 cli_runner.go:164] Run: docker network inspect addons-054604
	W1213 10:11:16.619370  908469 cli_runner.go:211] docker network inspect addons-054604 returned with exit code 1
	I1213 10:11:16.619434  908469 network_create.go:287] error running [docker network inspect addons-054604]: docker network inspect addons-054604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-054604 not found
	I1213 10:11:16.619452  908469 network_create.go:289] output of [docker network inspect addons-054604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-054604 not found
	
	** /stderr **
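	
	(editor's note) The "network addons-054604 not found" error above is the expected first-run path: minikube probes for an existing network and, only when that probe fails, picks a free private subnet and creates one, as the following lines show. A bare-bones equivalent of that probe-then-create flow, assuming the 192.168.49.0/24 subnet is free (flags copied from the create command below):
	
		docker network inspect addons-054604 >/dev/null 2>&1 || \
		  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
		    -o com.docker.network.driver.mtu=1500 addons-054604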
	I1213 10:11:16.619558  908469 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:11:16.636487  908469 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a546e0}
	I1213 10:11:16.636530  908469 network_create.go:124] attempt to create docker network addons-054604 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 10:11:16.636595  908469 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-054604 addons-054604
	I1213 10:11:16.695687  908469 network_create.go:108] docker network addons-054604 192.168.49.0/24 created
	I1213 10:11:16.695738  908469 kic.go:121] calculated static IP "192.168.49.2" for the "addons-054604" container
	I1213 10:11:16.695823  908469 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:11:16.712374  908469 cli_runner.go:164] Run: docker volume create addons-054604 --label name.minikube.sigs.k8s.io=addons-054604 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:11:16.730051  908469 oci.go:103] Successfully created a docker volume addons-054604
	I1213 10:11:16.730134  908469 cli_runner.go:164] Run: docker run --rm --name addons-054604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054604 --entrypoint /usr/bin/test -v addons-054604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:11:18.807819  908469 cli_runner.go:217] Completed: docker run --rm --name addons-054604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054604 --entrypoint /usr/bin/test -v addons-054604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (2.077646129s)
	I1213 10:11:18.807853  908469 oci.go:107] Successfully prepared a docker volume addons-054604
	I1213 10:11:18.807892  908469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:11:18.807904  908469 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:11:18.807982  908469 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:11:22.878705  908469 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.070678289s)
	I1213 10:11:22.878736  908469 kic.go:203] duration metric: took 4.070828387s to extract preloaded images to volume ...
	W1213 10:11:22.878900  908469 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:11:22.879012  908469 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:11:22.932064  908469 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-054604 --name addons-054604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-054604 --network addons-054604 --ip 192.168.49.2 --volume addons-054604:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:11:23.263191  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Running}}
	I1213 10:11:23.285327  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:23.315372  908469 cli_runner.go:164] Run: docker exec addons-054604 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:11:23.369758  908469 oci.go:144] the created container "addons-054604" has a running status.
	I1213 10:11:23.369789  908469 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa...
	I1213 10:11:23.537886  908469 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:11:23.563400  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:23.595351  908469 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:11:23.595378  908469 kic_runner.go:114] Args: [docker exec --privileged addons-054604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:11:23.664690  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:23.687150  908469 machine.go:94] provisionDockerMachine start ...
	I1213 10:11:23.687260  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:23.712803  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:23.713120  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:23.713128  908469 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:11:23.713731  908469 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50672->127.0.0.1:33508: read: connection reset by peer
	I1213 10:11:26.865447  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054604
	
	I1213 10:11:26.865481  908469 ubuntu.go:182] provisioning hostname "addons-054604"
	I1213 10:11:26.865582  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:26.883311  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:26.883662  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:26.883674  908469 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-054604 && echo "addons-054604" | sudo tee /etc/hostname
	I1213 10:11:27.043713  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054604
	
	I1213 10:11:27.043814  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:27.061851  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:27.062191  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:27.062216  908469 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-054604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-054604/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-054604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:11:27.213982  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:11:27.214011  908469 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:11:27.214045  908469 ubuntu.go:190] setting up certificates
	I1213 10:11:27.214066  908469 provision.go:84] configureAuth start
	I1213 10:11:27.214134  908469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054604
	I1213 10:11:27.230677  908469 provision.go:143] copyHostCerts
	I1213 10:11:27.230773  908469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:11:27.230902  908469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:11:27.230964  908469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:11:27.231019  908469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.addons-054604 san=[127.0.0.1 192.168.49.2 addons-054604 localhost minikube]
	I1213 10:11:27.531851  908469 provision.go:177] copyRemoteCerts
	I1213 10:11:27.531934  908469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:11:27.531975  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:27.549579  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:27.653412  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:11:27.671161  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:11:27.688998  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 10:11:27.706451  908469 provision.go:87] duration metric: took 492.356677ms to configureAuth
	I1213 10:11:27.706482  908469 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:11:27.706674  908469 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:11:27.706788  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:27.723681  908469 main.go:143] libmachine: Using SSH client type: native
	I1213 10:11:27.724032  908469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33508 <nil> <nil>}
	I1213 10:11:27.724051  908469 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:11:28.165728  908469 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:11:28.165765  908469 machine.go:97] duration metric: took 4.478582135s to provisionDockerMachine
	I1213 10:11:28.165776  908469 client.go:176] duration metric: took 11.954512595s to LocalClient.Create
	I1213 10:11:28.165790  908469 start.go:167] duration metric: took 11.954576842s to libmachine.API.Create "addons-054604"
	I1213 10:11:28.165797  908469 start.go:293] postStartSetup for "addons-054604" (driver="docker")
	I1213 10:11:28.165807  908469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:11:28.165891  908469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:11:28.165935  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.184151  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.289163  908469 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:11:28.292356  908469 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:11:28.292389  908469 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:11:28.292403  908469 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:11:28.292470  908469 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:11:28.292499  908469 start.go:296] duration metric: took 126.696518ms for postStartSetup
	I1213 10:11:28.292816  908469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054604
	I1213 10:11:28.309666  908469 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/config.json ...
	I1213 10:11:28.309961  908469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:11:28.310011  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.326796  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.430594  908469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:11:28.435144  908469 start.go:128] duration metric: took 12.229419889s to createHost
	I1213 10:11:28.435174  908469 start.go:83] releasing machines lock for "addons-054604", held for 12.229570422s
	I1213 10:11:28.435245  908469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054604
	I1213 10:11:28.451771  908469 ssh_runner.go:195] Run: cat /version.json
	I1213 10:11:28.451804  908469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:11:28.451823  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.451857  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:28.473169  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.483518  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:28.664257  908469 ssh_runner.go:195] Run: systemctl --version
	I1213 10:11:28.670875  908469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:11:28.716450  908469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:11:28.720676  908469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:11:28.720816  908469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:11:28.748799  908469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 10:11:28.748834  908469 start.go:496] detecting cgroup driver to use...
	I1213 10:11:28.748868  908469 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:11:28.748927  908469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:11:28.765590  908469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:11:28.779112  908469 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:11:28.779177  908469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:11:28.797176  908469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:11:28.815953  908469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:11:28.936324  908469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:11:29.056414  908469 docker.go:234] disabling docker service ...
	I1213 10:11:29.056535  908469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:11:29.079427  908469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:11:29.093062  908469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:11:29.202060  908469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:11:29.321704  908469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:11:29.334540  908469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:11:29.349120  908469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:11:29.349186  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.358108  908469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:11:29.358191  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.367488  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.375826  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.384816  908469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:11:29.392790  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.401425  908469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:11:29.415082  908469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
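	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape. This is a sketch: the [crio.image]/[crio.runtime] table headers are assumed from CRI-O's stock layout and are not shown in this log, only the rewritten keys are:

	    [crio.image]
	    # set by the pause_image sed above
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    # cgroup driver matched to the "cgroupfs" detected on the host
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    # list created if absent, then the unprivileged-port sysctl prepended
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)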
	I1213 10:11:29.423865  908469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:11:29.431260  908469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:11:29.438673  908469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:29.563148  908469 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:11:29.760125  908469 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:11:29.760246  908469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:11:29.763780  908469 start.go:564] Will wait 60s for crictl version
	I1213 10:11:29.763859  908469 ssh_runner.go:195] Run: which crictl
	I1213 10:11:29.767171  908469 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:11:29.797081  908469 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:11:29.797212  908469 ssh_runner.go:195] Run: crio --version
	I1213 10:11:29.827143  908469 ssh_runner.go:195] Run: crio --version
	I1213 10:11:29.864448  908469 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 10:11:29.867207  908469 cli_runner.go:164] Run: docker network inspect addons-054604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:11:29.884089  908469 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:11:29.888172  908469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:11:29.898627  908469 kubeadm.go:884] updating cluster {Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:11:29.898757  908469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:11:29.898851  908469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:11:29.938612  908469 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:11:29.938640  908469 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:11:29.938700  908469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:11:29.969010  908469 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:11:29.969034  908469 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:11:29.969042  908469 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 10:11:29.969141  908469 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-054604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
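	(This drop-in is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. If you need to confirm the merged unit the node's kubelet actually runs with, one way, using minikube's own ssh passthrough, is:

	    minikube -p addons-054604 ssh -- sudo systemctl cat kubelet
	)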
	I1213 10:11:29.969235  908469 ssh_runner.go:195] Run: crio config
	I1213 10:11:30.053947  908469 cni.go:84] Creating CNI manager for ""
	I1213 10:11:30.053987  908469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:11:30.054005  908469 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:11:30.054061  908469 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-054604 NodeName:addons-054604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:11:30.054255  908469 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-054604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
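	(A generated config like the one above can be sanity-checked on the node without mutating anything via kubeadm's dry-run mode; this is a sketch, and on a docker-driver node it may need the same --ignore-preflight-errors list the real invocation below uses:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)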
	
	I1213 10:11:30.054386  908469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:11:30.064191  908469 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:11:30.064288  908469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:11:30.075251  908469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 10:11:30.091710  908469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:11:30.107458  908469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1213 10:11:30.123050  908469 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:11:30.127378  908469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
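	(Between this grep-and-rewrite one-liner and the matching one at 10:11:29.888, /etc/hosts on the node ends up carrying two minikube-managed entries, tab-separated as in the commands themselves:

	    192.168.49.1	host.minikube.internal
	    192.168.49.2	control-plane.minikube.internal
	)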
	I1213 10:11:30.139504  908469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:30.263355  908469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:11:30.279385  908469 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604 for IP: 192.168.49.2
	I1213 10:11:30.279460  908469 certs.go:195] generating shared ca certs ...
	I1213 10:11:30.279491  908469 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.279662  908469 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:11:30.809770  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt ...
	I1213 10:11:30.809804  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt: {Name:mke9af723c1802ab5f9881f377ee1cc145a10625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.810039  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key ...
	I1213 10:11:30.810056  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key: {Name:mk35b68be8531b2f3c3930895b2758ea9f2d9c3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.810147  908469 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:11:30.986161  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt ...
	I1213 10:11:30.986195  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt: {Name:mk05ef8b7a67caf7d58435e6dc3055b3f8800763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.986374  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key ...
	I1213 10:11:30.986391  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key: {Name:mk4187a64c78da2cf099426e1ad8e6cb90229bc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:30.986467  908469 certs.go:257] generating profile certs ...
	I1213 10:11:30.986529  908469 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.key
	I1213 10:11:30.986549  908469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt with IP's: []
	I1213 10:11:31.175112  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt ...
	I1213 10:11:31.175147  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: {Name:mk05007e0f0f8a2cee63a7e5c259d597b9174c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.175347  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.key ...
	I1213 10:11:31.175362  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.key: {Name:mk026d4ab665fd6d0f8cd3a2cfb67ffe0df375e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.175450  908469 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff
	I1213 10:11:31.175480  908469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 10:11:31.323609  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff ...
	I1213 10:11:31.323643  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff: {Name:mk46ba298c7b9377bbae5f93060762fcd3f2448a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.323827  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff ...
	I1213 10:11:31.323842  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff: {Name:mk266a0ccecc3d5157687879e70164ea26a8f1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.323942  908469 certs.go:382] copying /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt.f18dc9ff -> /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt
	I1213 10:11:31.324035  908469 certs.go:386] copying /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key.f18dc9ff -> /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key
	I1213 10:11:31.324093  908469 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key
	I1213 10:11:31.324112  908469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt with IP's: []
	I1213 10:11:31.502137  908469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt ...
	I1213 10:11:31.502170  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt: {Name:mk4628c1ee88d6ec7065762b64d62c762b9a6b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.502367  908469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key ...
	I1213 10:11:31.502381  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key: {Name:mk688ca218594e35f8f3b894ae5d1e13e60f38d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:31.502582  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:11:31.502632  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:11:31.502659  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:11:31.502689  908469 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:11:31.503304  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:11:31.522438  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:11:31.541125  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:11:31.559408  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:11:31.577675  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:11:31.594965  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:11:31.612927  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:11:31.630613  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:11:31.647452  908469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:11:31.664201  908469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:11:31.676848  908469 ssh_runner.go:195] Run: openssl version
	I1213 10:11:31.683356  908469 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.690968  908469 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:11:31.698663  908469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.702391  908469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.702495  908469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:11:31.744174  908469 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:11:31.751760  908469 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
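	(The link name b5213941.0 is not arbitrary: it is the OpenSSL subject hash printed by the x509 command two steps up, plus the .0 suffix that OpenSSL's hashed certificate-directory lookup expects. It can be reproduced by hand on the node:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941; /etc/ssl/certs/b5213941.0 must point at that CA
	)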
	I1213 10:11:31.759098  908469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:11:31.763621  908469 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:11:31.763674  908469 kubeadm.go:401] StartCluster: {Name:addons-054604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:11:31.763753  908469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:11:31.763822  908469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:11:31.792258  908469 cri.go:89] found id: ""
	I1213 10:11:31.792334  908469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:11:31.800076  908469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:11:31.808153  908469 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:11:31.808261  908469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:11:31.816142  908469 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:11:31.816212  908469 kubeadm.go:158] found existing configuration files:
	
	I1213 10:11:31.816286  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:11:31.823718  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:11:31.823809  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:11:31.831461  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:11:31.838595  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:11:31.838681  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:11:31.845992  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:11:31.853656  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:11:31.853763  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:11:31.861091  908469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:11:31.868712  908469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:11:31.868786  908469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:11:31.876136  908469 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:11:31.915793  908469 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:11:31.916321  908469 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:11:31.939613  908469 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:11:31.939951  908469 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:11:31.940079  908469 kubeadm.go:319] OS: Linux
	I1213 10:11:31.940147  908469 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:11:31.940204  908469 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:11:31.940257  908469 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:11:31.940316  908469 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:11:31.940367  908469 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:11:31.940434  908469 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:11:31.940526  908469 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:11:31.940601  908469 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:11:31.940685  908469 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:11:32.007330  908469 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:11:32.007450  908469 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:11:32.007551  908469 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:11:32.017500  908469 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:11:32.021744  908469 out.go:252]   - Generating certificates and keys ...
	I1213 10:11:32.021845  908469 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:11:32.021922  908469 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:11:33.570048  908469 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:11:33.938587  908469 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:11:34.308929  908469 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:11:35.272718  908469 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:11:35.701530  908469 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:11:35.701924  908469 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-054604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:11:35.914605  908469 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:11:35.914967  908469 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-054604 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:11:36.272736  908469 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:11:36.872269  908469 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:11:37.216848  908469 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:11:37.217149  908469 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:11:37.756874  908469 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:11:37.856273  908469 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:11:38.137670  908469 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:11:38.363575  908469 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:11:38.839088  908469 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:11:38.839937  908469 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:11:38.842751  908469 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:11:38.846385  908469 out.go:252]   - Booting up control plane ...
	I1213 10:11:38.846507  908469 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:11:38.846597  908469 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:11:38.846673  908469 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:11:38.862616  908469 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:11:38.862990  908469 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:11:38.871185  908469 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:11:38.872157  908469 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:11:38.872575  908469 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:11:39.002283  908469 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:11:39.002406  908469 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:11:40.999191  908469 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000924838s
	I1213 10:11:41.005826  908469 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:11:41.005925  908469 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 10:11:41.006015  908469 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:11:41.006094  908469 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:11:43.966494  908469 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.962017345s
	I1213 10:11:45.285788  908469 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.281867052s
	I1213 10:11:47.006223  908469 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002240372s
	I1213 10:11:47.038114  908469 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:11:47.050877  908469 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:11:47.065871  908469 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:11:47.066082  908469 kubeadm.go:319] [mark-control-plane] Marking the node addons-054604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:11:47.077388  908469 kubeadm.go:319] [bootstrap-token] Using token: bsvmhz.9ag4oa3ly42j26tf
	I1213 10:11:47.080331  908469 out.go:252]   - Configuring RBAC rules ...
	I1213 10:11:47.080456  908469 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:11:47.086431  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:11:47.094289  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:11:47.098261  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:11:47.102478  908469 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:11:47.106907  908469 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:11:47.413629  908469 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:11:47.838952  908469 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:11:48.417792  908469 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:11:48.417812  908469 kubeadm.go:319] 
	I1213 10:11:48.417874  908469 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:11:48.417878  908469 kubeadm.go:319] 
	I1213 10:11:48.417955  908469 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:11:48.417959  908469 kubeadm.go:319] 
	I1213 10:11:48.417984  908469 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:11:48.418043  908469 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:11:48.418094  908469 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:11:48.418098  908469 kubeadm.go:319] 
	I1213 10:11:48.418152  908469 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:11:48.418157  908469 kubeadm.go:319] 
	I1213 10:11:48.418204  908469 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:11:48.418208  908469 kubeadm.go:319] 
	I1213 10:11:48.418260  908469 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:11:48.418336  908469 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:11:48.418408  908469 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:11:48.418412  908469 kubeadm.go:319] 
	I1213 10:11:48.418496  908469 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:11:48.418573  908469 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:11:48.418577  908469 kubeadm.go:319] 
	I1213 10:11:48.418660  908469 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bsvmhz.9ag4oa3ly42j26tf \
	I1213 10:11:48.418763  908469 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b3c7efe1ca5668711c134b6b98856894f548fd5af0cfb3bc5013f3facc637401 \
	I1213 10:11:48.418783  908469 kubeadm.go:319] 	--control-plane 
	I1213 10:11:48.418787  908469 kubeadm.go:319] 
	I1213 10:11:48.418871  908469 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:11:48.418875  908469 kubeadm.go:319] 
	I1213 10:11:48.418956  908469 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bsvmhz.9ag4oa3ly42j26tf \
	I1213 10:11:48.419058  908469 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b3c7efe1ca5668711c134b6b98856894f548fd5af0cfb3bc5013f3facc637401 
	I1213 10:11:48.422452  908469 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:11:48.422678  908469 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:11:48.422782  908469 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:11:48.422802  908469 cni.go:84] Creating CNI manager for ""
	I1213 10:11:48.422810  908469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:11:48.425915  908469 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 10:11:48.428751  908469 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 10:11:48.432933  908469 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:11:48.432956  908469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 10:11:48.447928  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
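	(Once the kindnet manifest is applied, a quick way to watch the CNI pods come up, assuming the kindnet DaemonSet carries its conventional app=kindnet label, which this log does not show, is:

	    kubectl -n kube-system get pods -l app=kindnet -w
	)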
	I1213 10:11:48.732302  908469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:11:48.732394  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-054604 minikube.k8s.io/updated_at=2025_12_13T10_11_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=addons-054604 minikube.k8s.io/primary=true
	I1213 10:11:48.732350  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:48.864582  908469 ops.go:34] apiserver oom_adj: -16
	I1213 10:11:48.864732  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:49.365271  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:49.865386  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:50.365645  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:50.865361  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:51.364696  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:51.865310  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:52.365402  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:52.865684  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:53.365240  908469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:11:53.499457  908469 kubeadm.go:1114] duration metric: took 4.767165787s to wait for elevateKubeSystemPrivileges
	I1213 10:11:53.499492  908469 kubeadm.go:403] duration metric: took 21.735822105s to StartCluster
	I1213 10:11:53.499510  908469 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:53.499625  908469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:11:53.499997  908469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:11:53.500237  908469 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:11:53.500379  908469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:11:53.500629  908469 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:11:53.500670  908469 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 10:11:53.500748  908469 addons.go:70] Setting yakd=true in profile "addons-054604"
	I1213 10:11:53.500766  908469 addons.go:239] Setting addon yakd=true in "addons-054604"
	I1213 10:11:53.500794  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.501254  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.501944  908469 addons.go:70] Setting metrics-server=true in profile "addons-054604"
	I1213 10:11:53.501961  908469 addons.go:70] Setting registry=true in profile "addons-054604"
	I1213 10:11:53.501971  908469 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-054604"
	I1213 10:11:53.501978  908469 addons.go:239] Setting addon registry=true in "addons-054604"
	I1213 10:11:53.501980  908469 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-054604"
	I1213 10:11:53.502007  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.502015  908469 addons.go:70] Setting cloud-spanner=true in profile "addons-054604"
	I1213 10:11:53.502028  908469 addons.go:239] Setting addon cloud-spanner=true in "addons-054604"
	I1213 10:11:53.502041  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.502432  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.502449  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.506773  908469 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-054604"
	I1213 10:11:53.506893  908469 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-054604"
	I1213 10:11:53.506953  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.502009  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.507484  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.501951  908469 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-054604"
	I1213 10:11:53.513930  908469 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-054604"
	I1213 10:11:53.514004  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.501964  908469 addons.go:239] Setting addon metrics-server=true in "addons-054604"
	I1213 10:11:53.514950  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.515311  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514071  908469 addons.go:70] Setting default-storageclass=true in profile "addons-054604"
	I1213 10:11:53.515547  908469 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-054604"
	I1213 10:11:53.515812  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514080  908469 addons.go:70] Setting gcp-auth=true in profile "addons-054604"
	I1213 10:11:53.520826  908469 mustload.go:66] Loading cluster: addons-054604
	I1213 10:11:53.521090  908469 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:11:53.523619  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514080  908469 addons.go:70] Setting registry-creds=true in profile "addons-054604"
	I1213 10:11:53.514087  908469 addons.go:70] Setting ingress=true in profile "addons-054604"
	I1213 10:11:53.514093  908469 addons.go:70] Setting storage-provisioner=true in profile "addons-054604"
	I1213 10:11:53.527129  908469 addons.go:239] Setting addon storage-provisioner=true in "addons-054604"
	I1213 10:11:53.514101  908469 addons.go:70] Setting ingress-dns=true in profile "addons-054604"
	I1213 10:11:53.527202  908469 addons.go:239] Setting addon ingress-dns=true in "addons-054604"
	I1213 10:11:53.514101  908469 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-054604"
	I1213 10:11:53.527288  908469 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-054604"
	I1213 10:11:53.514107  908469 addons.go:70] Setting inspektor-gadget=true in profile "addons-054604"
	I1213 10:11:53.527373  908469 addons.go:239] Setting addon inspektor-gadget=true in "addons-054604"
	I1213 10:11:53.514108  908469 addons.go:70] Setting volcano=true in profile "addons-054604"
	I1213 10:11:53.527441  908469 addons.go:239] Setting addon volcano=true in "addons-054604"
	I1213 10:11:53.514114  908469 addons.go:70] Setting volumesnapshots=true in profile "addons-054604"
	I1213 10:11:53.527520  908469 addons.go:239] Setting addon volumesnapshots=true in "addons-054604"
	I1213 10:11:53.514492  908469 out.go:179] * Verifying Kubernetes components...
	I1213 10:11:53.514517  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.514926  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.549481  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.550121  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.557672  908469 addons.go:239] Setting addon registry-creds=true in "addons-054604"
	I1213 10:11:53.557761  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.558356  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.549497  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.549500  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.527064  908469 addons.go:239] Setting addon ingress=true in "addons-054604"
	I1213 10:11:53.588747  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.594461  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.598165  908469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:11:53.635515  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.636045  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.658583  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.659118  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.662037  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.675320  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.686224  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.715435  908469 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1213 10:11:53.743245  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 10:11:53.750123  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 10:11:53.753696  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 10:11:53.757785  908469 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 10:11:53.757955  908469 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 10:11:53.758989  908469 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 10:11:53.785784  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 10:11:53.785825  908469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 10:11:53.785945  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.793993  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.795921  908469 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 10:11:53.795947  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 10:11:53.796006  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.780882  908469 addons.go:239] Setting addon default-storageclass=true in "addons-054604"
	I1213 10:11:53.802121  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.802883  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:11:53.808875  908469 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 10:11:53.812214  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 10:11:53.812332  908469 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 10:11:53.812371  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 10:11:53.812464  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.780926  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 10:11:53.816210  908469 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 10:11:53.816288  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.816489  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:11:53.816493  908469 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 10:11:53.874901  908469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
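	(The sed pipeline above rewrites CoreDNS's Corefile in place: it inserts a log directive before errors and a hosts block ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the docker gateway. A sketch of the resulting server block, with the unrelated plugins elided:

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }
	)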
	I1213 10:11:53.876345  908469 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 10:11:53.876736  908469 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 10:11:53.893928  908469 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:11:53.901094  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 10:11:53.901166  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.901669  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 10:11:53.902801  908469 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:11:53.903061  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 10:11:53.903144  908469 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:11:53.906528  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 10:11:53.905692  908469 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:11:53.908400  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 10:11:53.905708  908469 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 10:11:53.908548  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.906639  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.910041  908469 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:11:53.910058  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:11:53.910126  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.911511  908469 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-054604"
	I1213 10:11:53.911575  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:11:53.912008  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	W1213 10:11:53.912830  908469 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 10:11:53.916979  908469 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:11:53.934618  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 10:11:53.934709  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.916999  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 10:11:53.937928  908469 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 10:11:53.938038  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 10:11:53.939825  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:11:53.940700  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 10:11:53.940726  908469 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 10:11:53.940805  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.972539  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 10:11:53.979102  908469 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 10:11:53.982154  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 10:11:53.982183  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 10:11:53.982261  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:53.990818  908469 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:11:53.990838  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 10:11:53.990900  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:54.011197  908469 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:11:54.011223  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 10:11:54.011292  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
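Each burst of cli_runner lines above runs the same Docker template query: it asks the daemon which host port is published for the container's 22/tcp, and the resulting port (33508 in the sshutil lines that follow) is what every addon installer then dials over SSH. A minimal Go sketch of that lookup, assuming only the docker CLI and the container name shown in the log, and not minikube's actual cli_runner:

    // Ask Docker which host port is bound to the container's 22/tcp.
    // The template string mirrors the one in the log; illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("addons-054604")
        fmt.Println(port, err) // expect the port the sshutil lines report, e.g. 33508
    }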
	I1213 10:11:54.057711  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.058329  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.103169  908469 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:11:54.103203  908469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:11:54.103269  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:54.151759  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.182947  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.194559  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.201131  908469 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 10:11:54.204927  908469 out.go:179]   - Using image docker.io/busybox:stable
	I1213 10:11:54.211092  908469 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:11:54.211118  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 10:11:54.211185  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:11:54.217708  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.230331  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.237877  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.249654  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.251180  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.276041  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	W1213 10:11:54.279548  908469 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:11:54.279590  908469 retry.go:31] will retry after 167.591346ms: ssh: handshake failed: EOF
	I1213 10:11:54.288166  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.290206  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.292899  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	W1213 10:11:54.294010  908469 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:11:54.294031  908469 retry.go:31] will retry after 339.894686ms: ssh: handshake failed: EOF
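The two sshutil.go warnings above show the shape of this log's transient-failure handling: an SSH handshake fails with EOF while the container's sshd is still coming up, and the caller retries after a randomized delay (about 168ms, then about 340ms). A small self-contained sketch of that jittered-retry pattern, using a hypothetical helper name and making no claim to match minikube's retry.go:

    // retryWithJitter retries a transient failure after a randomized delay,
    // the pattern the retry.go:31 lines suggest. Hypothetical helper; sketch only.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Randomize the wait so concurrent dialers do not retry in lockstep.
            d := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithJitter(4, 150*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("result:", err)
    }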
	I1213 10:11:54.303633  908469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:11:54.313366  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:11:54.652585  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:11:54.726593  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 10:11:54.807003  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:11:54.811902  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:11:54.844630  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:11:54.856012  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:11:54.874064  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 10:11:54.874089  908469 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 10:11:54.894129  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 10:11:54.894159  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 10:11:54.984896  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:11:55.003069  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:11:55.004628  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:11:55.010692  908469 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 10:11:55.010721  908469 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 10:11:55.092446  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 10:11:55.092473  908469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 10:11:55.109595  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:11:55.119094  908469 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.243354014s)
	I1213 10:11:55.119125  908469 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
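The injection reported here is done entirely by the bash pipeline that completed one line earlier: sed inserts a hosts stanza ahead of the Corefile's forward directive (and a log directive ahead of errors), then kubectl replace writes the edited coredns ConfigMap back. Reconstructed from the sed expressions in that command, the Corefile gains:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

With that stanza in place, pods resolving host.minikube.internal get the host-side address 192.168.49.1, and every other name falls through to the existing forward block.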
	I1213 10:11:55.120742  908469 node_ready.go:35] waiting up to 6m0s for node "addons-054604" to be "Ready" ...
	I1213 10:11:55.125184  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 10:11:55.125212  908469 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 10:11:55.170807  908469 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 10:11:55.170887  908469 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 10:11:55.193981  908469 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:11:55.194052  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 10:11:55.346404  908469 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:11:55.346479  908469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 10:11:55.347699  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 10:11:55.347756  908469 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 10:11:55.394304  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 10:11:55.394382  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 10:11:55.394723  908469 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 10:11:55.394773  908469 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 10:11:55.439263  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:11:55.455215  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:11:55.574344  908469 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:11:55.574369  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 10:11:55.628738  908469 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 10:11:55.628766  908469 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 10:11:55.668071  908469 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-054604" context rescaled to 1 replicas
	I1213 10:11:55.680325  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 10:11:55.680351  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 10:11:55.731506  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:11:55.896105  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 10:11:55.896132  908469 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 10:11:55.959774  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 10:11:55.959800  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 10:11:56.063615  908469 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:11:56.063642  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 10:11:56.151865  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 10:11:56.151891  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 10:11:56.259772  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:11:56.314803  908469 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 10:11:56.314881  908469 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 10:11:56.463523  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 10:11:56.463594  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 10:11:56.607433  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 10:11:56.607508  908469 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 10:11:56.721935  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.995258102s)
	I1213 10:11:56.854728  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 10:11:56.854751  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 10:11:56.925209  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 10:11:56.925237  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 10:11:57.029836  908469 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 10:11:57.029915  908469 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1213 10:11:57.124173  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:11:57.134210  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1213 10:11:59.127769  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:11:59.530707  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.723670593s)
	I1213 10:11:59.530743  908469 addons.go:495] Verifying addon ingress=true in "addons-054604"
	I1213 10:11:59.530932  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.719006406s)
	I1213 10:11:59.531171  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.6865146s)
	I1213 10:11:59.531230  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.6751864s)
	I1213 10:11:59.531325  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.546389174s)
	I1213 10:11:59.531362  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.526708489s)
	I1213 10:11:59.531395  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.528302149s)
	I1213 10:11:59.531447  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.42182856s)
	I1213 10:11:59.531557  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.092267641s)
	I1213 10:11:59.531573  908469 addons.go:495] Verifying addon registry=true in "addons-054604"
	I1213 10:11:59.531955  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.076709114s)
	I1213 10:11:59.531980  908469 addons.go:495] Verifying addon metrics-server=true in "addons-054604"
	I1213 10:11:59.532016  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.800482768s)
	I1213 10:11:59.534256  908469 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-054604 service yakd-dashboard -n yakd-dashboard
	
	I1213 10:11:59.534339  908469 out.go:179] * Verifying ingress addon...
	I1213 10:11:59.534360  908469 out.go:179] * Verifying registry addon...
	I1213 10:11:59.539347  908469 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 10:11:59.540017  908469 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 10:11:59.562304  908469 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:11:59.562372  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:11:59.562886  908469 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 10:11:59.562908  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:11:59.594838  908469 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
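This default-storageclass failure is an ordinary optimistic-concurrency conflict: two writers raced to update the local-path StorageClass, and the apiserver rejected the one holding a stale resourceVersion. client-go's retry.RetryOnConflict is the stock remedy, re-reading the object before each attempt; a minimal sketch, with clientset assumed to be a *kubernetes.Clientset already in scope, not minikube's addon code:

    // Re-read and re-apply the annotation change until it lands without a
    // resourceVersion conflict. Sketch only.
    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/util/retry"
    )

    func markNonDefault(ctx context.Context) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Fetch the latest version on every attempt so the update
            // carries a fresh resourceVersion.
            sc, err := clientset.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }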
	I1213 10:11:59.669477  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.409621169s)
	W1213 10:11:59.669614  908469 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 10:11:59.669653  908469 retry.go:31] will retry after 202.635343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
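The failure above is a CRD-establishment race, not a bad manifest: the three snapshot.storage.k8s.io CRDs are created in the same apply, but the csi-hostpath-snapclass object is submitted before the apiserver starts serving the new kind, hence "ensure CRDs are installed first". The retry below succeeds once the CRDs are established. A hedged sketch of waiting for a CRD's Established condition before applying dependents, with extClient assumed to be an apiextensions clientset:

    // Poll until the named CRD reports Established. Sketch only.
    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    func waitForCRD(ctx context.Context, name string) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                crd, err := extClient.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not visible yet; keep polling
                }
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

For example, waitForCRD(ctx, "volumesnapshotclasses.snapshot.storage.k8s.io") before applying csi-hostpath-snapshotclass.yaml would remove the race.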
	I1213 10:11:59.862739  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.728429394s)
	I1213 10:11:59.862777  908469 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-054604"
	I1213 10:11:59.866013  908469 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 10:11:59.869494  908469 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 10:11:59.873297  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:11:59.888298  908469 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:11:59.888333  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:00.071553  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:00.073196  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:00.432921  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:00.545651  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:00.545801  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:00.873119  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:01.044003  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:01.044467  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:01.373496  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:01.404048  908469 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 10:12:01.404151  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:12:01.420397  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:12:01.530541  908469 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 10:12:01.544980  908469 addons.go:239] Setting addon gcp-auth=true in "addons-054604"
	I1213 10:12:01.545027  908469 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:12:01.545514  908469 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:12:01.547401  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:01.547804  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:01.563658  908469 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 10:12:01.563713  908469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:12:01.583397  908469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	W1213 10:12:01.624335  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:01.872961  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:02.043105  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:02.044446  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:02.372228  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:02.543739  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:02.543925  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:02.873255  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:03.045206  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:03.046256  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:03.110037  908469 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.546349139s)
	I1213 10:12:03.110342  908469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.236963938s)
	I1213 10:12:03.113309  908469 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:12:03.116083  908469 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 10:12:03.118934  908469 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 10:12:03.118965  908469 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 10:12:03.133428  908469 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 10:12:03.133454  908469 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 10:12:03.148544  908469 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:12:03.148569  908469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 10:12:03.162470  908469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:12:03.372519  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:03.552311  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:03.553301  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:03.628582  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:03.677675  908469 addons.go:495] Verifying addon gcp-auth=true in "addons-054604"
	I1213 10:12:03.681038  908469 out.go:179] * Verifying gcp-auth addon...
	I1213 10:12:03.684243  908469 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 10:12:03.687766  908469 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 10:12:03.687785  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:03.873056  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:04.042997  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:04.043159  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:04.187917  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:04.372950  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:04.544134  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:04.544383  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:04.687115  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:04.873891  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:05.044177  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:05.044543  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:05.187465  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:05.373278  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:05.543376  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:05.543656  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:05.687581  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:05.873053  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:06.043231  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:06.044558  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:06.124232  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:06.187138  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:06.372951  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:06.543704  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:06.544094  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:06.687693  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:06.872698  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:07.043983  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:07.044197  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:07.188007  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:07.372928  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:07.543461  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:07.543929  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:07.687076  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:07.872808  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:08.044421  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:08.045356  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:08.188115  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:08.372938  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:08.544447  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:08.544534  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:08.624451  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:08.687294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:08.873117  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:09.043366  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:09.043699  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:09.187357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:09.372224  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:09.543169  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:09.543225  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:09.687723  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:09.872530  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:10.043743  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:10.044670  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:10.187920  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:10.372808  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:10.543979  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:10.544669  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:10.687354  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:10.873617  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:11.044212  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:11.044283  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:11.124050  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:11.188068  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:11.372854  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:11.543986  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:11.544081  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:11.687656  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:11.872699  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:12.043916  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:12.044148  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:12.187721  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:12.373092  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:12.544687  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:12.544867  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:12.687862  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:12.875242  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:13.043464  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:13.043658  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:13.187305  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:13.373242  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:13.543847  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:13.544061  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:13.624106  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:13.688114  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:13.873084  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:14.043254  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:14.043413  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:14.187831  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:14.373250  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:14.543891  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:14.544132  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:14.688350  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:14.873624  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:15.047340  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:15.047522  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:15.187482  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:15.372612  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:15.544103  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:15.544501  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:15.687998  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:15.873071  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:16.043567  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:16.044064  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:16.123500  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:16.187712  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:16.372951  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:16.547265  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:16.547500  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:16.687193  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:16.873606  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:17.043788  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:17.043853  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:17.187350  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:17.373779  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:17.544935  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:17.545024  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:17.687709  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:17.872457  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:18.044066  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:18.044267  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:18.124150  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:18.188145  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:18.373219  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:18.543853  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:18.544113  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:18.687943  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:18.873103  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:19.043098  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:19.043877  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:19.187357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:19.373257  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:19.543589  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:19.543706  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:19.687285  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:19.873342  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:20.043598  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:20.043870  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:20.189297  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:20.373575  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:20.543676  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:20.544339  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:20.624627  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:20.687252  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:20.873407  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:21.044128  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:21.044127  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:21.188022  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:21.372957  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:21.543405  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:21.543896  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:21.687602  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:21.872328  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:22.044581  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:22.046010  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:22.187703  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:22.372451  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:22.544808  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:22.545516  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:22.687022  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:22.873349  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:23.043739  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:23.043856  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:12:23.123985  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:23.187697  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:23.373339  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:23.543883  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:23.544270  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:23.688094  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:23.872939  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:24.043381  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:24.043619  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:24.187719  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:24.372499  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:24.543521  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:24.543782  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:24.687715  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:24.872813  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:25.049744  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:25.050049  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:25.124237  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:25.187062  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:25.372973  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:25.543173  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:25.543453  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:25.687131  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:25.873345  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:26.043610  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:26.043763  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:26.187787  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:26.372595  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:26.544096  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:26.544370  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:26.687881  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:26.873409  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:27.043857  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:27.044127  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:27.187711  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:27.372817  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:27.543389  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:27.543829  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:27.623499  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:27.687350  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:27.873012  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:28.043668  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:28.043816  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:28.187720  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:28.372552  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:28.543910  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:28.544067  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:28.687725  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:28.873313  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:29.043391  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:29.043622  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:29.187328  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:29.372536  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:29.543501  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:29.543851  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:29.687098  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:29.873040  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:30.045191  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:30.045420  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:30.124965  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:30.188152  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:30.372847  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:30.543447  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:30.543503  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:30.687855  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:30.872748  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:31.043952  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:31.044209  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:31.187212  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:31.373232  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:31.544357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:31.544629  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:31.687294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:31.872168  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:32.043698  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:32.043803  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:32.187496  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:32.372142  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:32.543347  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:32.543576  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:12:32.624582  908469 node_ready.go:57] node "addons-054604" has "Ready":"False" status (will retry)
	I1213 10:12:32.687612  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:32.872326  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:33.043508  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:33.043919  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:33.187274  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:33.373039  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:33.544151  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:33.544314  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:33.688207  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:33.872423  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:34.043510  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:34.043707  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:34.187538  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:34.397719  908469 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:12:34.397793  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:34.590625  908469 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:12:34.590780  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:34.590864  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
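The kapi.go:96 lines above and below show minikube polling each addon's pods by label selector (csi-hostpath-driver, registry, ingress-nginx, gcp-auth) roughly every 500ms until every matching pod leaves Pending. Below is a minimal stand-alone sketch of that kind of poll, written against client-go directly rather than minikube's internal kapi helpers, and assuming a kubeconfig at the default path:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// One of the selectors being polled in the log above.
	selector := "kubernetes.io/minikube-addons=registry"
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			log.Fatal(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("%d/%d pods running for %q\n", running, len(pods.Items), selector)
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the probe spacing seen in the log
	}
}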
	I1213 10:12:34.680701  908469 node_ready.go:49] node "addons-054604" is "Ready"
	I1213 10:12:34.680793  908469 node_ready.go:38] duration metric: took 39.560017143s for node "addons-054604" to be "Ready" ...
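The node_ready.go messages track the node's Ready condition; the "will retry" warnings earlier in the log correspond to that condition reporting False. A sketch of the underlying check (client-go, same kubeconfig assumption, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-054604", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Status is "False" while the node is still coming up, "True" once Ready.
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}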
	I1213 10:12:34.680827  908469 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:12:34.680998  908469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:34.716229  908469 api_server.go:72] duration metric: took 41.215954564s to wait for apiserver process to appear ...
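The apiserver process wait above shells out to pgrep inside the node over SSH (the ssh_runner.go line). Run locally, the equivalent stdlib-only shell-out looks like the sketch below; the flags mirror the command line in the log (-x exact match, -n newest, -f match the full command line):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	out, err := cmd.Output()
	if err != nil {
		// pgrep exits non-zero when no process matches, so an error here
		// simply means the apiserver has not appeared yet.
		fmt.Println("kube-apiserver not running yet:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}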
	I1213 10:12:34.716269  908469 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:12:34.716307  908469 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 10:12:34.718346  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:34.730648  908469 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 10:12:34.731863  908469 api_server.go:141] control plane version: v1.34.2
	I1213 10:12:34.731887  908469 api_server.go:131] duration metric: took 15.609579ms to wait for apiserver health ...
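The healthz wait is a plain HTTPS GET that succeeds on a 200 response with body "ok", exactly as logged above. A stdlib-only sketch follows; skipping TLS verification is an assumption made here only because the sketch has no cluster CA to hand, and a real client should trust the cluster's CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The healthy case in the log: status 200 with body "ok".
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}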
	I1213 10:12:34.731896  908469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:12:34.750910  908469 system_pods.go:59] 19 kube-system pods found
	I1213 10:12:34.750949  908469 system_pods.go:61] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Pending
	I1213 10:12:34.750983  908469 system_pods.go:61] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:34.750995  908469 system_pods.go:61] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending
	I1213 10:12:34.751001  908469 system_pods.go:61] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending
	I1213 10:12:34.751007  908469 system_pods.go:61] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:34.751018  908469 system_pods.go:61] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:34.751022  908469 system_pods.go:61] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:34.751026  908469 system_pods.go:61] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:34.751031  908469 system_pods.go:61] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending
	I1213 10:12:34.751034  908469 system_pods.go:61] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:34.751038  908469 system_pods.go:61] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:34.751062  908469 system_pods.go:61] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending
	I1213 10:12:34.751068  908469 system_pods.go:61] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending
	I1213 10:12:34.751084  908469 system_pods.go:61] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:34.751098  908469 system_pods.go:61] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:34.751103  908469 system_pods.go:61] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending
	I1213 10:12:34.751108  908469 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending
	I1213 10:12:34.751112  908469 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending
	I1213 10:12:34.751122  908469 system_pods.go:61] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending
	I1213 10:12:34.751128  908469 system_pods.go:74] duration metric: took 19.225573ms to wait for pod list to return data ...
	I1213 10:12:34.751135  908469 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:12:34.754412  908469 default_sa.go:45] found service account: "default"
	I1213 10:12:34.754446  908469 default_sa.go:55] duration metric: took 3.27761ms for default service account to be created ...
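The default_sa wait succeeds once the token controller has created the "default" ServiceAccount in the "default" namespace. A sketch of that lookup (same kubeconfig assumption as above):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		// A NotFound error here means the token controller has not created it yet.
		log.Fatal(err)
	}
	fmt.Println("found service account:", sa.Name)
}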
	I1213 10:12:34.754456  908469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:12:34.787447  908469 system_pods.go:86] 19 kube-system pods found
	I1213 10:12:34.787501  908469 system_pods.go:89] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Pending
	I1213 10:12:34.787510  908469 system_pods.go:89] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:34.787515  908469 system_pods.go:89] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending
	I1213 10:12:34.787520  908469 system_pods.go:89] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending
	I1213 10:12:34.787563  908469 system_pods.go:89] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:34.787568  908469 system_pods.go:89] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:34.787579  908469 system_pods.go:89] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:34.787583  908469 system_pods.go:89] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:34.787587  908469 system_pods.go:89] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending
	I1213 10:12:34.787592  908469 system_pods.go:89] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:34.787596  908469 system_pods.go:89] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:34.787609  908469 system_pods.go:89] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending
	I1213 10:12:34.787613  908469 system_pods.go:89] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending
	I1213 10:12:34.787660  908469 system_pods.go:89] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:34.787678  908469 system_pods.go:89] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:34.787683  908469 system_pods.go:89] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending
	I1213 10:12:34.787687  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending
	I1213 10:12:34.787690  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending
	I1213 10:12:34.787693  908469 system_pods.go:89] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending
	I1213 10:12:34.787720  908469 retry.go:31] will retry after 214.288949ms: missing components: kube-dns
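retry.go:31 retries with a randomized delay (214ms here, 285ms on the next attempt) until no components are missing. A stdlib-only sketch of that jittered-retry pattern; checkComponents is a hypothetical stand-in for the kube-dns readiness check, and the jitter scheme is an illustration rather than minikube's exact backoff:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkComponents is a hypothetical placeholder: it reports what the log
// reports while coredns is still Pending.
func checkComponents() error {
	return errors.New("missing components: kube-dns")
}

func main() {
	base := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if err := checkComponents(); err != nil {
			// Jitter the delay so concurrent waiters do not probe in lockstep.
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
			continue
		}
		fmt.Println("all components running")
		return
	}
}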
	I1213 10:12:34.883223  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:35.027498  908469 system_pods.go:86] 19 kube-system pods found
	I1213 10:12:35.027555  908469 system_pods.go:89] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:12:35.027565  908469 system_pods.go:89] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:35.027571  908469 system_pods.go:89] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending
	I1213 10:12:35.027575  908469 system_pods.go:89] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending
	I1213 10:12:35.027579  908469 system_pods.go:89] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:35.027585  908469 system_pods.go:89] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:35.027613  908469 system_pods.go:89] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:35.027626  908469 system_pods.go:89] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:35.027633  908469 system_pods.go:89] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:12:35.027637  908469 system_pods.go:89] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:35.027642  908469 system_pods.go:89] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:35.027656  908469 system_pods.go:89] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:12:35.027660  908469 system_pods.go:89] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending
	I1213 10:12:35.027666  908469 system_pods.go:89] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:35.027691  908469 system_pods.go:89] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:35.027709  908469 system_pods.go:89] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending
	I1213 10:12:35.027720  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending
	I1213 10:12:35.027728  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:12:35.027732  908469 system_pods.go:89] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending
	I1213 10:12:35.027755  908469 retry.go:31] will retry after 285.292541ms: missing components: kube-dns
	I1213 10:12:35.056284  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:35.056922  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:35.197979  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:35.318926  908469 system_pods.go:86] 19 kube-system pods found
	I1213 10:12:35.318972  908469 system_pods.go:89] "coredns-66bc5c9577-t662h" [ed1b0e90-ee52-4fca-af1a-1a6ebe350efa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:12:35.318988  908469 system_pods.go:89] "csi-hostpath-attacher-0" [a46ff09e-da25-4c79-9691-00e866c026a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:12:35.320717  908469 system_pods.go:89] "csi-hostpath-resizer-0" [f308e519-8eb9-4ed3-b00e-bea81357ccf2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:12:35.320748  908469 system_pods.go:89] "csi-hostpathplugin-8fv49" [853a35ff-28df-45cb-b34b-fa2eac6bce76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:12:35.320787  908469 system_pods.go:89] "etcd-addons-054604" [af24a122-e2ef-4b2a-8b9f-cc98cee3c494] Running
	I1213 10:12:35.320802  908469 system_pods.go:89] "kindnet-wx4r9" [a806d7b5-a124-4837-97ec-c315ca041ed7] Running
	I1213 10:12:35.320807  908469 system_pods.go:89] "kube-apiserver-addons-054604" [c4bb91c6-22e8-4695-bc48-51cfe3d18458] Running
	I1213 10:12:35.320829  908469 system_pods.go:89] "kube-controller-manager-addons-054604" [5313760b-2899-4b00-8740-c88adbdc9b1b] Running
	I1213 10:12:35.320843  908469 system_pods.go:89] "kube-ingress-dns-minikube" [cd037993-329b-40e5-ad1b-458335cb925e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:12:35.320852  908469 system_pods.go:89] "kube-proxy-hp7zc" [2d246c30-0c4f-426d-956d-1b053698d54f] Running
	I1213 10:12:35.320858  908469 system_pods.go:89] "kube-scheduler-addons-054604" [28260261-42b5-4215-a767-25d752dc219c] Running
	I1213 10:12:35.320866  908469 system_pods.go:89] "metrics-server-85b7d694d7-2ppdp" [55d8b817-f36a-4527-b64b-aabcc328810b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:12:35.320884  908469 system_pods.go:89] "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:12:35.320911  908469 system_pods.go:89] "registry-6b586f9694-4bxkh" [74643ad6-13cc-45ef-ad16-f7ecd0873ff9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:12:35.320933  908469 system_pods.go:89] "registry-creds-764b6fb674-2htf4" [29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:12:35.320946  908469 system_pods.go:89] "registry-proxy-xclch" [d5e8dae3-581e-4d96-b092-ed60f94f3d00] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:12:35.320954  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8tp2d" [4159ab23-1000-4f10-8edc-ea73af07f77d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:12:35.320975  908469 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbmhp" [88ffaba8-cffd-4b27-ab19-843f22b84185] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:12:35.320983  908469 system_pods.go:89] "storage-provisioner" [4c794042-57f4-49aa-8f64-71725002278e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:12:35.320992  908469 system_pods.go:126] duration metric: took 566.506729ms to wait for k8s-apps to be running ...
	I1213 10:12:35.321015  908469 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:12:35.321090  908469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:12:35.347112  908469 system_svc.go:56] duration metric: took 26.085789ms WaitForService to wait for kubelet
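The kubelet wait is a single shell-out: with --quiet, systemctl answers through its exit status, so a nil error from Run means the unit is active. A sketch mirroring the command line logged above verbatim (minikube runs it over SSH inside the node; this runs it locally):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Arguments copied from the ssh_runner line in the log.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}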
	I1213 10:12:35.347190  908469 kubeadm.go:587] duration metric: took 41.846919393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:12:35.347228  908469 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:12:35.350484  908469 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 10:12:35.350565  908469 node_conditions.go:123] node cpu capacity is 2
	I1213 10:12:35.350605  908469 node_conditions.go:105] duration metric: took 3.353976ms to run NodePressure ...
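The NodePressure verification reads the node's reported capacity (203034800Ki of ephemeral storage and 2 CPUs in the lines above). A sketch of that read with client-go, under the same kubeconfig assumption:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-054604", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Capacity is a map of resource name to quantity on the node status.
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
}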
	I1213 10:12:35.350640  908469 start.go:242] waiting for startup goroutines ...
	I1213 10:12:35.423179  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:35.548337  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:35.548553  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:35.687413  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:35.873238  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:36.045632  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:36.045941  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:36.188270  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:36.374188  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:36.543151  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:36.544259  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:36.687282  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:36.877695  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:37.044202  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:37.045036  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:37.188221  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:37.379932  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:37.546660  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:37.547467  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:37.688498  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:37.873357  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:38.046561  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:38.047027  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:38.192205  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:38.374079  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:38.547223  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:38.550528  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:38.691294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:38.876186  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:39.046124  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:39.046386  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:39.189807  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:39.375775  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:39.547890  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:39.548243  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:39.689010  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:39.874338  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:40.060241  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:40.061092  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:40.188707  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:40.373042  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:40.543588  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:40.545252  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:40.687800  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:40.873294  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:41.045320  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:41.045446  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:41.187383  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:41.372617  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:41.544406  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:41.544784  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:41.688286  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:41.872472  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:42.045582  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:42.045742  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:42.198041  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:42.373080  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:42.544214  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:42.544324  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:42.687731  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:42.873666  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:43.047469  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:43.048081  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:43.189044  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:43.373866  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:43.544469  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:43.544945  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:43.688315  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:43.873329  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:44.044843  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:44.046035  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:44.188621  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:44.372587  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:44.544380  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:44.544823  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:44.687944  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:44.873109  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:45.047180  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:45.047670  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:45.194310  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:45.374493  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:45.546181  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:45.546613  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:45.688869  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:45.873017  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:46.045773  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:46.046177  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:46.187349  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:46.372407  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:46.544253  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:46.544419  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:46.687433  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:46.873137  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:47.044457  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:47.044595  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:47.187801  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:47.373655  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:47.545030  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:47.545224  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:47.688386  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:47.873481  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:48.044679  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:48.045738  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:48.188945  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:48.373733  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:48.544530  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:48.544880  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:48.688449  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:48.873386  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:49.057251  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:49.057838  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:49.188065  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:49.374358  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:49.545753  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:49.546195  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:49.693213  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:49.873684  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:50.045174  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:50.045264  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:50.188418  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:50.373459  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:50.544764  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:50.545136  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:50.688315  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:50.874208  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:51.046780  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:51.050443  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:51.187831  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:51.373579  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:51.543812  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:51.544964  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:51.690260  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:51.873093  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:12:52.045684  908469 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:12:52.046318  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:12:52.188311  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:12:52.374048  908469 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[176 near-identical kapi.go:96 lines elided: the pods for "kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth" and "kubernetes.io/minikube-addons=csi-hostpath-driver" were each re-polled roughly every 500ms, every check reporting Pending: [<nil>], from 10:12:52.544 through 10:13:14.374]
	I1213 10:13:14.545662  908469 kapi.go:107] duration metric: took 1m15.005640056s to wait for kubernetes.io/minikube-addons=registry ...
	[43 similar polling lines elided: "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth" and "kubernetes.io/minikube-addons=csi-hostpath-driver" still Pending: [<nil>] on every ~500ms check from 10:13:14.546 through 10:13:21.552]
	I1213 10:13:21.696721  908469 kapi.go:107] duration metric: took 1m18.012476607s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 10:13:21.700445  908469 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-054604 cluster.
	I1213 10:13:21.703406  908469 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 10:13:21.706394  908469 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
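	The two hints in the out.go lines above are actionable; a minimal sketch of each, assuming minikube's documented gcp-auth behavior (the pod name "no-gcp-creds" and the busybox image are placeholders, and the label value "true" follows the addon's docs rather than anything shown in this log). The webhook injects credentials at admission, so the label must be present when the pod is created:
	
	    # hypothetical pod that should NOT receive mounted GCP credentials
	    kubectl run no-gcp-creds --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600
	    # re-mount credentials into pods that already exist (the --refresh the log mentions)
	    out/minikube-linux-arm64 -p addons-054604 addons enable gcp-auth --refresh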
	[25 similar polling lines elided: "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" still Pending: [<nil>] on every ~500ms check from 10:13:21.873 through 10:13:27.874]
	I1213 10:13:28.046090  908469 kapi.go:107] duration metric: took 1m28.506742164s to wait for app.kubernetes.io/name=ingress-nginx ...
	[13 similar polling lines elided: "kubernetes.io/minikube-addons=csi-hostpath-driver" still Pending: [<nil>] on every ~500ms check from 10:13:28.379 through 10:13:34.373]
	I1213 10:13:34.873916  908469 kapi.go:107] duration metric: took 1m35.004421787s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 10:13:34.876983  908469 out.go:179] * Enabled addons: registry-creds, cloud-spanner, storage-provisioner, ingress-dns, inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 10:13:34.879871  908469 addons.go:530] duration metric: took 1m41.379193757s for enable addons: enabled=[registry-creds cloud-spanner storage-provisioner ingress-dns inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1213 10:13:34.879947  908469 start.go:247] waiting for cluster config update ...
	I1213 10:13:34.880003  908469 start.go:256] writing updated cluster config ...
	I1213 10:13:34.880347  908469 ssh_runner.go:195] Run: rm -f paused
	I1213 10:13:34.883959  908469 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:13:34.887623  908469 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t662h" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.892854  908469 pod_ready.go:94] pod "coredns-66bc5c9577-t662h" is "Ready"
	I1213 10:13:34.892951  908469 pod_ready.go:86] duration metric: took 5.298311ms for pod "coredns-66bc5c9577-t662h" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.895604  908469 pod_ready.go:83] waiting for pod "etcd-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.900461  908469 pod_ready.go:94] pod "etcd-addons-054604" is "Ready"
	I1213 10:13:34.900492  908469 pod_ready.go:86] duration metric: took 4.859364ms for pod "etcd-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.903122  908469 pod_ready.go:83] waiting for pod "kube-apiserver-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.907949  908469 pod_ready.go:94] pod "kube-apiserver-addons-054604" is "Ready"
	I1213 10:13:34.907981  908469 pod_ready.go:86] duration metric: took 4.829136ms for pod "kube-apiserver-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:34.910511  908469 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:35.288428  908469 pod_ready.go:94] pod "kube-controller-manager-addons-054604" is "Ready"
	I1213 10:13:35.288458  908469 pod_ready.go:86] duration metric: took 377.917606ms for pod "kube-controller-manager-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:35.488097  908469 pod_ready.go:83] waiting for pod "kube-proxy-hp7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:35.888369  908469 pod_ready.go:94] pod "kube-proxy-hp7zc" is "Ready"
	I1213 10:13:35.888398  908469 pod_ready.go:86] duration metric: took 400.267614ms for pod "kube-proxy-hp7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:36.088286  908469 pod_ready.go:83] waiting for pod "kube-scheduler-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:36.487416  908469 pod_ready.go:94] pod "kube-scheduler-addons-054604" is "Ready"
	I1213 10:13:36.487464  908469 pod_ready.go:86] duration metric: took 399.130501ms for pod "kube-scheduler-addons-054604" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:13:36.487504  908469 pod_ready.go:40] duration metric: took 1.603509098s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:13:36.540269  908469 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 10:13:36.543395  908469 out.go:179] * Done! kubectl is now configured to use "addons-054604" cluster and "default" namespace by default
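	A quick way to confirm what the "Done!" line claims, using standard kubectl/minikube commands (not taken from this log):
	
	    kubectl config current-context                          # should print addons-054604
	    kubectl get pods -A                                     # addon pods across all namespaces
	    out/minikube-linux-arm64 -p addons-054604 addons list   # per-addon enabled/disabled summary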
	
	
	==> CRI-O <==
	Dec 13 10:13:38 addons-054604 crio[831]: time="2025-12-13T10:13:38.11070575Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.223125307Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b8ebdba0-973a-4c0e-b5c6-ea3ddb6a7659 name=/runtime.v1.ImageService/PullImage
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.22427664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c9020a1-9a7e-40bd-a634-58a295e12cd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.227673447Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=80613a55-c69a-4562-8d93-593fe10f4c73 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.235319281Z" level=info msg="Creating container: default/busybox/busybox" id=91bd9a22-9bd1-4ee7-bbc6-4dddee8c601e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.235455668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.242633771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.243448008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.263940254Z" level=info msg="Created container 2331841998ff0ea423cef4805322e8621293198bcd541280f0e9f985c8f4eb74: default/busybox/busybox" id=91bd9a22-9bd1-4ee7-bbc6-4dddee8c601e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.265172909Z" level=info msg="Starting container: 2331841998ff0ea423cef4805322e8621293198bcd541280f0e9f985c8f4eb74" id=154c3743-35af-45e0-864e-753910a8eeec name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 10:13:40 addons-054604 crio[831]: time="2025-12-13T10:13:40.268108902Z" level=info msg="Started container" PID=4944 containerID=2331841998ff0ea423cef4805322e8621293198bcd541280f0e9f985c8f4eb74 description=default/busybox/busybox id=154c3743-35af-45e0-864e-753910a8eeec name=/runtime.v1.RuntimeService/StartContainer sandboxID=4616a2e4f9ac84873d9e9f398abceb4ffb53d1d438c726077fb38389e6e70380
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.815980998Z" level=info msg="Removing container: 77965939c6aea60aa9021218ac11da766ef760320e1db22a962d16e66b155cd8" id=2e33512f-6a6b-4ddb-9499-c7751eca2829 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.81875859Z" level=info msg="Error loading conmon cgroup of container 77965939c6aea60aa9021218ac11da766ef760320e1db22a962d16e66b155cd8: cgroup deleted" id=2e33512f-6a6b-4ddb-9499-c7751eca2829 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.826991885Z" level=info msg="Removed container 77965939c6aea60aa9021218ac11da766ef760320e1db22a962d16e66b155cd8: gcp-auth/gcp-auth-certs-patch-v2hz8/patch" id=2e33512f-6a6b-4ddb-9499-c7751eca2829 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.836466155Z" level=info msg="Removing container: ee0c0ea5c585dc68a409fb2daa217f7087768930fe0586cfa99e8a5791397322" id=30fc5767-6fb8-4fa2-9f0f-ba1345630c5d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.83897962Z" level=info msg="Error loading conmon cgroup of container ee0c0ea5c585dc68a409fb2daa217f7087768930fe0586cfa99e8a5791397322: cgroup deleted" id=30fc5767-6fb8-4fa2-9f0f-ba1345630c5d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.847925317Z" level=info msg="Removed container ee0c0ea5c585dc68a409fb2daa217f7087768930fe0586cfa99e8a5791397322: gcp-auth/gcp-auth-certs-create-ksswf/create" id=30fc5767-6fb8-4fa2-9f0f-ba1345630c5d name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.85683012Z" level=info msg="Stopping pod sandbox: 1e7ebc1904b087d21e8ffe97db23124bb93352ddaf2923e62a95ee0a5e32b7f9" id=e08c7747-ce39-4af9-ba4b-684676860829 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.85728162Z" level=info msg="Stopped pod sandbox (already stopped): 1e7ebc1904b087d21e8ffe97db23124bb93352ddaf2923e62a95ee0a5e32b7f9" id=e08c7747-ce39-4af9-ba4b-684676860829 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.865155272Z" level=info msg="Removing pod sandbox: 1e7ebc1904b087d21e8ffe97db23124bb93352ddaf2923e62a95ee0a5e32b7f9" id=647bfaeb-0d5b-4386-8845-a0987fe8709c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.877903993Z" level=info msg="Removed pod sandbox: 1e7ebc1904b087d21e8ffe97db23124bb93352ddaf2923e62a95ee0a5e32b7f9" id=647bfaeb-0d5b-4386-8845-a0987fe8709c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.878673216Z" level=info msg="Stopping pod sandbox: 6a0ac9d3f901f9dcc83ac4142732ea0ad65868c0f9c365fd122e70483c5caf54" id=415ec0f4-52e2-453a-953e-633d230497f5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.878807889Z" level=info msg="Stopped pod sandbox (already stopped): 6a0ac9d3f901f9dcc83ac4142732ea0ad65868c0f9c365fd122e70483c5caf54" id=415ec0f4-52e2-453a-953e-633d230497f5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.879252538Z" level=info msg="Removing pod sandbox: 6a0ac9d3f901f9dcc83ac4142732ea0ad65868c0f9c365fd122e70483c5caf54" id=93c4450e-cb67-4bc7-bc19-a2f7bc0073b2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 10:13:47 addons-054604 crio[831]: time="2025-12-13T10:13:47.886071342Z" level=info msg="Removed pod sandbox: 6a0ac9d3f901f9dcc83ac4142732ea0ad65868c0f9c365fd122e70483c5caf54" id=93c4450e-cb67-4bc7-bc19-a2f7bc0073b2 name=/runtime.v1.RuntimeService/RemovePodSandbox
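	The RuntimeService calls logged above can be cross-checked from inside the node with crictl; a sketch, with <container-id> as a placeholder rather than an ID from this log:
	
	    out/minikube-linux-arm64 -p addons-054604 ssh -- sudo crictl ps               # running containers, as CRI-O sees them
	    out/minikube-linux-arm64 -p addons-054604 ssh -- sudo crictl inspect <container-id>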
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	2331841998ff0       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          7 seconds ago        Running             busybox                                  0                   4616a2e4f9ac8       busybox                                     default
	9dfc412275d47       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	411736ab35d31       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	fe7aa6350e217       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            17 seconds ago       Running             liveness-probe                           0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	8a3729518104c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           18 seconds ago       Running             hostpath                                 0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	10f4327e1d3d1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                19 seconds ago       Running             node-driver-registrar                    0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	1cde751118248       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             21 seconds ago       Running             controller                               0                   81b13b71009ee       ingress-nginx-controller-85d4c799dd-7hn6s   ingress-nginx
	f2fc5b929f6b1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 27 seconds ago       Running             gcp-auth                                 0                   16531eadfd416       gcp-auth-78565c9fb4-lvdkf                   gcp-auth
	e94b6593a2663       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            30 seconds ago       Running             gadget                                   0                   88cab76c5cc40       gadget-69sd7                                gadget
	1380fedb08b07       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              34 seconds ago       Running             registry-proxy                           0                   c06308ecafd98       registry-proxy-xclch                        kube-system
	441a32eb57b51       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   37 seconds ago       Running             csi-external-health-monitor-controller   0                   34035a86620df       csi-hostpathplugin-8fv49                    kube-system
	f871736240871       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             38 seconds ago       Running             csi-attacher                             0                   cf5357d3d4197       csi-hostpath-attacher-0                     kube-system
	a62092f409cd8       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             38 seconds ago       Exited              patch                                    2                   51e01e53770cc       ingress-nginx-admission-patch-484xv         ingress-nginx
	3ded6e57579bd       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      40 seconds ago       Running             volume-snapshot-controller               0                   b467b88894539       snapshot-controller-7d9fbc56b8-8tp2d        kube-system
	b0e2d0e7e16b2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      40 seconds ago       Running             volume-snapshot-controller               0                   c2c08130fe129       snapshot-controller-7d9fbc56b8-bbmhp        kube-system
	c6400c2f13db8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             41 seconds ago       Running             local-path-provisioner                   0                   b5fdb9b87cfdd       local-path-provisioner-648f6765c9-74jdr     local-path-storage
	02565e8c756a3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           42 seconds ago       Running             registry                                 0                   9d8c398c5dcec       registry-6b586f9694-4bxkh                   kube-system
	e7a671c2a2145       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   44 seconds ago       Exited              create                                   0                   ba8231160a3d0       ingress-nginx-admission-create-x5kpk        ingress-nginx
	17cb3c6d34002       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               45 seconds ago       Running             minikube-ingress-dns                     0                   e617197090ff2       kube-ingress-dns-minikube                   kube-system
	b9bc680915f66       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               56 seconds ago       Running             cloud-spanner-emulator                   0                   4ca206b74a66d       cloud-spanner-emulator-5bdddb765-q7dcr      default
	26436889f5cc9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   eb79efb791625       nvidia-device-plugin-daemonset-gzjcp        kube-system
	43da519983e66       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   bb565e7cb69ea       csi-hostpath-resizer-0                      kube-system
	eceabc21413f6       docker.io/marcnuri/yakd@sha256:0b7e831df7fe4ad1c8c56a736a8d66bd86e243f6777d3c512ead47199d8fbe1a                                              About a minute ago   Running             yakd                                     0                   90c509eb45619       yakd-dashboard-6654c87f9b-wwf8h             yakd-dashboard
	b95fb046aaf43       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   1f424dd3ab02e       metrics-server-85b7d694d7-2ppdp             kube-system
	6211c2eaceea4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   01a5ac9ef9183       storage-provisioner                         kube-system
	f5883fd88845b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   9e8398db2e6d7       coredns-66bc5c9577-t662h                    kube-system
	ef33020503a2d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   ea3874e7b458b       kube-proxy-hp7zc                            kube-system
	5add978c4ef16       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   91e0693d65d50       kindnet-wx4r9                               kube-system
	dc808fcd2f20c       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   255eaef63aa04       kube-scheduler-addons-054604                kube-system
	20394cb814363       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   5d972d122c495       etcd-addons-054604                          kube-system
	e1f2fa7dc8f92       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   d01569133b014       kube-controller-manager-addons-054604       kube-system
	3554210f6ef5f       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   3fc2fbbc4e83e       kube-apiserver-addons-054604                kube-system
	
	
	==> coredns [f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6] <==
	[INFO] 10.244.0.14:45197 - 48978 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106102s
	[INFO] 10.244.0.14:45197 - 27507 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004379916s
	[INFO] 10.244.0.14:45197 - 54454 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004608695s
	[INFO] 10.244.0.14:45197 - 37517 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000132777s
	[INFO] 10.244.0.14:45197 - 5526 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000199058s
	[INFO] 10.244.0.14:59467 - 24127 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148662s
	[INFO] 10.244.0.14:59467 - 23905 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163965s
	[INFO] 10.244.0.14:42631 - 34311 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112888s
	[INFO] 10.244.0.14:42631 - 34139 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000157935s
	[INFO] 10.244.0.14:47889 - 18634 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114931s
	[INFO] 10.244.0.14:47889 - 18181 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176462s
	[INFO] 10.244.0.14:51722 - 61521 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001295613s
	[INFO] 10.244.0.14:51722 - 61324 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001317735s
	[INFO] 10.244.0.14:44518 - 58636 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000115981s
	[INFO] 10.244.0.14:44518 - 58456 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000166468s
	[INFO] 10.244.0.21:58426 - 44169 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000210439s
	[INFO] 10.244.0.21:59614 - 31790 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014945s
	[INFO] 10.244.0.21:33206 - 60804 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124145s
	[INFO] 10.244.0.21:37664 - 53028 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130487s
	[INFO] 10.244.0.21:35513 - 12061 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128781s
	[INFO] 10.244.0.21:42379 - 23024 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131054s
	[INFO] 10.244.0.21:57259 - 60829 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004006554s
	[INFO] 10.244.0.21:34817 - 57279 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003528534s
	[INFO] 10.244.0.21:52894 - 56547 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.004683584s
	[INFO] 10.244.0.21:41134 - 8800 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006230342s
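	
	A note on the lookup pattern above, as a minimal sketch assuming the default pod resolv.conf (ndots:5 plus the cluster search path): `registry.kube-system.svc.cluster.local` has only four dots, so each query is first expanded through the search domains (`cluster.local`, `us-east-2.compute.internal`, ...), producing the NXDOMAIN answers, before the bare name returns NOERROR. A trailing dot marks the name absolute and skips the expansion:
	
	  nslookup registry.kube-system.svc.cluster.local.   # absolute name, single query
	  nslookup registry.kube-system.svc.cluster.local    # walks the search path first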
	
	
	==> describe nodes <==
	Name:               addons-054604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-054604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=addons-054604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T10_11_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-054604
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-054604"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 10:11:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-054604
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 10:13:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 10:13:19 +0000   Sat, 13 Dec 2025 10:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 10:13:19 +0000   Sat, 13 Dec 2025 10:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 10:13:19 +0000   Sat, 13 Dec 2025 10:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 10:13:19 +0000   Sat, 13 Dec 2025 10:12:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-054604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                2949b3e3-1bf6-486b-8e0a-6501682d5a50
	  Boot ID:                    ff73813c-a05d-46ba-ba43-f4a4c3dc42b1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-q7dcr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  gadget                      gadget-69sd7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  gcp-auth                    gcp-auth-78565c9fb4-lvdkf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-7hn6s    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         109s
	  kube-system                 coredns-66bc5c9577-t662h                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 csi-hostpathplugin-8fv49                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 etcd-addons-054604                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-wx4r9                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-addons-054604                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-addons-054604        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-hp7zc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-addons-054604                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 metrics-server-85b7d694d7-2ppdp              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         110s
	  kube-system                 nvidia-device-plugin-daemonset-gzjcp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-6b586f9694-4bxkh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 registry-creds-764b6fb674-2htf4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 registry-proxy-xclch                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 snapshot-controller-7d9fbc56b8-8tp2d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 snapshot-controller-7d9fbc56b8-bbmhp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  local-path-storage          local-path-provisioner-648f6765c9-74jdr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-wwf8h              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     110s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 113s  kube-proxy       
	  Normal   Starting                 2m1s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m1s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m    kubelet          Node addons-054604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m    kubelet          Node addons-054604 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m    kubelet          Node addons-054604 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           116s  node-controller  Node addons-054604 event: Registered Node addons-054604 in Controller
	  Normal   NodeReady                74s   kubelet          Node addons-054604 status is now: NodeReady
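	
	A quick check of the percentages in the Allocated resources table above: with 2 CPUs (2000m) allocatable, 1050m of requests is 1050/2000 = 52.5%, which the table shows as 52%; 638Mi of memory requests against 8022296Ki allocatable is 653312Ki/8022296Ki ≈ 8.1%, shown as 8%.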
	
	
	==> dmesg <==
	[Dec13 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283] <==
	{"level":"warn","ts":"2025-12-13T10:11:44.038255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.051737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.090745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.101323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.136333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.143921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.159439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.174569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.194326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.206222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.222348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.238984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.253738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.268282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.290002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.322224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.333722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.348767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:11:44.421159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:00.654379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:00.718180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.261085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.277741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.320191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:12:22.335506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f2fc5b929f6b1fffd663ee10be5c61705ab1807c9f8199e01f38faae41bd3143] <==
	2025/12/13 10:13:20 GCP Auth Webhook started!
	2025/12/13 10:13:37 Ready to marshal response ...
	2025/12/13 10:13:37 Ready to write response ...
	2025/12/13 10:13:37 Ready to marshal response ...
	2025/12/13 10:13:37 Ready to write response ...
	2025/12/13 10:13:37 Ready to marshal response ...
	2025/12/13 10:13:37 Ready to write response ...
	
	
	==> kernel <==
	 10:13:48 up  4:56,  0 user,  load average: 3.46, 2.79, 2.17
	Linux addons-054604 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26] <==
	I1213 10:11:53.928201       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 10:11:53.928529       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 10:12:23.926813       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 10:12:23.928165       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 10:12:23.929429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 10:12:23.929513       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1213 10:12:25.428614       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 10:12:25.428646       1 metrics.go:72] Registering metrics
	I1213 10:12:25.428717       1 controller.go:711] "Syncing nftables rules"
	I1213 10:12:33.934394       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:12:33.934453       1 main.go:301] handling current node
	I1213 10:12:43.927129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:12:43.927169       1 main.go:301] handling current node
	I1213 10:12:53.926077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:12:53.926110       1 main.go:301] handling current node
	I1213 10:13:03.927083       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:13:03.927118       1 main.go:301] handling current node
	I1213 10:13:13.926584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:13:13.926622       1 main.go:301] handling current node
	I1213 10:13:23.927091       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:13:23.927122       1 main.go:301] handling current node
	I1213 10:13:33.927128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:13:33.927169       1 main.go:301] handling current node
	I1213 10:13:43.927150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:13:43.927183       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0] <==
	W1213 10:12:40.143379       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:12:40.143535       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 10:12:40.144604       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.145.86:443: connect: connection refused" logger="UnhandledError"
	W1213 10:12:41.145036       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:12:41.145080       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 10:12:41.145094       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 10:12:41.145152       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:12:41.145213       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 10:12:41.146319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1213 10:12:45.163096       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:12:45.163160       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 10:12:45.163749       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.145.86:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1213 10:12:45.241021       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 10:13:45.981399       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49846: use of closed network connection
	E1213 10:13:46.228049       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49876: use of closed network connection
	E1213 10:13:46.362409       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49896: use of closed network connection
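	
	The aggregation errors above name the APIService and its backing service explicitly, so the failing path can be checked from the client side (commands assume the same kubectl context used elsewhere in this report):
	
	  kubectl --context addons-054604 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context addons-054604 -n kube-system get endpointslices -l kubernetes.io/service-name=metrics-server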
	
	
	==> kube-controller-manager [e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517] <==
	I1213 10:11:52.288551       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-054604"
	I1213 10:11:52.288660       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 10:11:52.289467       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:11:52.289554       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 10:11:52.289586       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 10:11:52.290070       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 10:11:52.290889       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 10:11:52.290906       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 10:11:52.290960       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 10:11:52.291104       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 10:11:52.291113       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 10:11:52.291127       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 10:11:52.292645       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 10:11:52.292668       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 10:11:52.296408       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 10:11:52.299583       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 10:11:58.324104       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 10:12:22.253604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 10:12:22.253756       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 10:12:22.253834       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 10:12:22.308165       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 10:12:22.312573       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 10:12:22.354423       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:12:22.413443       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:12:37.295229       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d] <==
	I1213 10:11:54.382421       1 server_linux.go:53] "Using iptables proxy"
	I1213 10:11:54.528184       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 10:11:54.629042       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:11:54.629114       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 10:11:54.629197       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:11:54.736448       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 10:11:54.736504       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:11:54.751339       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:11:54.751641       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:11:54.751655       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:11:54.763339       1 config.go:200] "Starting service config controller"
	I1213 10:11:54.763359       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:11:54.763374       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:11:54.763378       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:11:54.763385       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:11:54.763389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:11:54.764341       1 config.go:309] "Starting node config controller"
	I1213 10:11:54.764350       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:11:54.764356       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:11:54.864159       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 10:11:54.864203       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:11:54.864216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
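	
	The "configuration may be incomplete" warning above points at nodePortAddresses; the equivalent KubeProxyConfiguration field is sketched below (the `primary` special value is taken from the hint in the log itself):
	
	  # KubeProxyConfiguration excerpt: accept NodePort traffic only on the
	  # node's primary IP instead of all local IPs
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  nodePortAddresses: ["primary"]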
	
	
	==> kube-scheduler [dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14] <==
	E1213 10:11:45.284658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:11:45.284727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 10:11:45.289124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 10:11:45.303501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 10:11:45.303629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 10:11:45.303699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 10:11:45.303805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 10:11:45.303879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 10:11:45.303939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 10:11:45.303999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:11:45.304054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:11:45.304112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 10:11:45.304157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 10:11:45.304204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:11:45.304252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 10:11:45.304407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 10:11:45.304472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 10:11:46.132634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 10:11:46.197458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:11:46.212219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:11:46.227654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:11:46.268419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 10:11:46.286953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:11:46.296663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 10:11:46.866703       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 10:13:12 addons-054604 kubelet[1279]: I1213 10:13:12.838329    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tx9v\" (UniqueName: \"kubernetes.io/projected/2466d150-78b8-44a0-903b-4b337171e9d4-kube-api-access-7tx9v\") pod \"2466d150-78b8-44a0-903b-4b337171e9d4\" (UID: \"2466d150-78b8-44a0-903b-4b337171e9d4\") "
	Dec 13 10:13:12 addons-054604 kubelet[1279]: I1213 10:13:12.845359    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2466d150-78b8-44a0-903b-4b337171e9d4-kube-api-access-7tx9v" (OuterVolumeSpecName: "kube-api-access-7tx9v") pod "2466d150-78b8-44a0-903b-4b337171e9d4" (UID: "2466d150-78b8-44a0-903b-4b337171e9d4"). InnerVolumeSpecName "kube-api-access-7tx9v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 10:13:12 addons-054604 kubelet[1279]: I1213 10:13:12.938907    1279 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7tx9v\" (UniqueName: \"kubernetes.io/projected/2466d150-78b8-44a0-903b-4b337171e9d4-kube-api-access-7tx9v\") on node \"addons-054604\" DevicePath \"\""
	Dec 13 10:13:13 addons-054604 kubelet[1279]: I1213 10:13:13.477032    1279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a0ac9d3f901f9dcc83ac4142732ea0ad65868c0f9c365fd122e70483c5caf54"
	Dec 13 10:13:14 addons-054604 kubelet[1279]: I1213 10:13:14.487815    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xclch" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:13:15 addons-054604 kubelet[1279]: I1213 10:13:15.491996    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xclch" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:13:18 addons-054604 kubelet[1279]: I1213 10:13:18.519253    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-69sd7" podStartSLOduration=65.991406534 podStartE2EDuration="1m20.519235906s" podCreationTimestamp="2025-12-13 10:11:58 +0000 UTC" firstStartedPulling="2025-12-13 10:13:03.041986653 +0000 UTC m=+75.373061613" lastFinishedPulling="2025-12-13 10:13:17.569816026 +0000 UTC m=+89.900890985" observedRunningTime="2025-12-13 10:13:18.519066041 +0000 UTC m=+90.850141001" watchObservedRunningTime="2025-12-13 10:13:18.519235906 +0000 UTC m=+90.850310874"
	Dec 13 10:13:18 addons-054604 kubelet[1279]: I1213 10:13:18.520253    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-xclch" podStartSLOduration=6.061119945 podStartE2EDuration="44.520226753s" podCreationTimestamp="2025-12-13 10:12:34 +0000 UTC" firstStartedPulling="2025-12-13 10:12:35.707310072 +0000 UTC m=+48.038385032" lastFinishedPulling="2025-12-13 10:13:14.166416872 +0000 UTC m=+86.497491840" observedRunningTime="2025-12-13 10:13:14.517349056 +0000 UTC m=+86.848424032" watchObservedRunningTime="2025-12-13 10:13:18.520226753 +0000 UTC m=+90.851301712"
	Dec 13 10:13:23 addons-054604 kubelet[1279]: I1213 10:13:23.232605    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-lvdkf" podStartSLOduration=65.972636454 podStartE2EDuration="1m20.232583724s" podCreationTimestamp="2025-12-13 10:12:03 +0000 UTC" firstStartedPulling="2025-12-13 10:13:06.46620272 +0000 UTC m=+78.797277679" lastFinishedPulling="2025-12-13 10:13:20.726149948 +0000 UTC m=+93.057224949" observedRunningTime="2025-12-13 10:13:21.537503229 +0000 UTC m=+93.868578246" watchObservedRunningTime="2025-12-13 10:13:23.232583724 +0000 UTC m=+95.563658692"
	Dec 13 10:13:27 addons-054604 kubelet[1279]: I1213 10:13:27.568351    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-7hn6s" podStartSLOduration=67.931760911 podStartE2EDuration="1m28.568331046s" podCreationTimestamp="2025-12-13 10:11:59 +0000 UTC" firstStartedPulling="2025-12-13 10:13:06.496931558 +0000 UTC m=+78.828006518" lastFinishedPulling="2025-12-13 10:13:27.133501685 +0000 UTC m=+99.464576653" observedRunningTime="2025-12-13 10:13:27.56718851 +0000 UTC m=+99.898263478" watchObservedRunningTime="2025-12-13 10:13:27.568331046 +0000 UTC m=+99.899406014"
	Dec 13 10:13:31 addons-054604 kubelet[1279]: I1213 10:13:31.010817    1279 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 13 10:13:31 addons-054604 kubelet[1279]: I1213 10:13:31.011439    1279 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 13 10:13:34 addons-054604 kubelet[1279]: I1213 10:13:34.615577    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-8fv49" podStartSLOduration=2.454318097 podStartE2EDuration="1m0.615551119s" podCreationTimestamp="2025-12-13 10:12:34 +0000 UTC" firstStartedPulling="2025-12-13 10:12:35.425322253 +0000 UTC m=+47.756397221" lastFinishedPulling="2025-12-13 10:13:33.586555275 +0000 UTC m=+105.917630243" observedRunningTime="2025-12-13 10:13:34.613456497 +0000 UTC m=+106.944531473" watchObservedRunningTime="2025-12-13 10:13:34.615551119 +0000 UTC m=+106.946626079"
	Dec 13 10:13:37 addons-054604 kubelet[1279]: I1213 10:13:37.798317    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9186a0f8-a73c-4d8e-bf56-4db415faf59d" path="/var/lib/kubelet/pods/9186a0f8-a73c-4d8e-bf56-4db415faf59d/volumes"
	Dec 13 10:13:37 addons-054604 kubelet[1279]: I1213 10:13:37.884776    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e0da02d5-4d9f-4b25-ad58-dc8915a3077d-gcp-creds\") pod \"busybox\" (UID: \"e0da02d5-4d9f-4b25-ad58-dc8915a3077d\") " pod="default/busybox"
	Dec 13 10:13:37 addons-054604 kubelet[1279]: I1213 10:13:37.884987    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhk6k\" (UniqueName: \"kubernetes.io/projected/e0da02d5-4d9f-4b25-ad58-dc8915a3077d-kube-api-access-mhk6k\") pod \"busybox\" (UID: \"e0da02d5-4d9f-4b25-ad58-dc8915a3077d\") " pod="default/busybox"
	Dec 13 10:13:38 addons-054604 kubelet[1279]: W1213 10:13:38.098805    1279 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/218e0b6bff8529458d50df21ae3b67480ee0457432734dc8a39716faf5b2e157/crio-4616a2e4f9ac84873d9e9f398abceb4ffb53d1d438c726077fb38389e6e70380 WatchSource:0}: Error finding container 4616a2e4f9ac84873d9e9f398abceb4ffb53d1d438c726077fb38389e6e70380: Status 404 returned error can't find the container with id 4616a2e4f9ac84873d9e9f398abceb4ffb53d1d438c726077fb38389e6e70380
	Dec 13 10:13:38 addons-054604 kubelet[1279]: E1213 10:13:38.187908    1279 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 13 10:13:38 addons-054604 kubelet[1279]: E1213 10:13:38.188008    1279 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9-gcr-creds podName:29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9 nodeName:}" failed. No retries permitted until 2025-12-13 10:14:42.187989049 +0000 UTC m=+174.519064009 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9-gcr-creds") pod "registry-creds-764b6fb674-2htf4" (UID: "29d9b0f9-2ffa-4a0e-86e7-7a0f5b8da4a9") : secret "registry-creds-gcr" not found
	Dec 13 10:13:40 addons-054604 kubelet[1279]: I1213 10:13:40.653727    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5319968899999998 podStartE2EDuration="3.653711399s" podCreationTimestamp="2025-12-13 10:13:37 +0000 UTC" firstStartedPulling="2025-12-13 10:13:38.103471384 +0000 UTC m=+110.434546344" lastFinishedPulling="2025-12-13 10:13:40.225185894 +0000 UTC m=+112.556260853" observedRunningTime="2025-12-13 10:13:40.653130805 +0000 UTC m=+112.984205765" watchObservedRunningTime="2025-12-13 10:13:40.653711399 +0000 UTC m=+112.984786359"
	Dec 13 10:13:43 addons-054604 kubelet[1279]: I1213 10:13:43.795803    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2466d150-78b8-44a0-903b-4b337171e9d4" path="/var/lib/kubelet/pods/2466d150-78b8-44a0-903b-4b337171e9d4/volumes"
	Dec 13 10:13:47 addons-054604 kubelet[1279]: I1213 10:13:47.813908    1279 scope.go:117] "RemoveContainer" containerID="77965939c6aea60aa9021218ac11da766ef760320e1db22a962d16e66b155cd8"
	Dec 13 10:13:47 addons-054604 kubelet[1279]: I1213 10:13:47.833368    1279 scope.go:117] "RemoveContainer" containerID="ee0c0ea5c585dc68a409fb2daa217f7087768930fe0586cfa99e8a5791397322"
	Dec 13 10:13:47 addons-054604 kubelet[1279]: E1213 10:13:47.918130    1279 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/71f106f2c004ed9bd6b9670c7efdfd66c445ef72dab9283684bbd7c8969b4419/diff" to get inode usage: stat /var/lib/containers/storage/overlay/71f106f2c004ed9bd6b9670c7efdfd66c445ef72dab9283684bbd7c8969b4419/diff: no such file or directory, extraDiskErr: <nil>
	Dec 13 10:13:47 addons-054604 kubelet[1279]: E1213 10:13:47.943455    1279 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1f3ebacabf96340f19124b704e626d9d7909d326c998361b27dae88c8c6a9238/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1f3ebacabf96340f19124b704e626d9d7909d326c998361b27dae88c8c6a9238/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9] <==
	W1213 10:13:23.858384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:25.861881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:25.866886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:27.877177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:27.887366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:29.897520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:29.905199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:31.908208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:31.913811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:33.916964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:33.924182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:35.927138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:35.931444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:37.937316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:37.948372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:39.951997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:39.959747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:41.963092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:41.967210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:43.970480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:43.977775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:45.991801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:46.000100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:48.005208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:13:48.017657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
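	
	The repeated client-go warning above indicates the provisioner still reads v1 Endpoints (presumably for its watch or leader-election lock; the log does not say which). The replacement resources can be listed directly:
	
	  kubectl --context addons-054604 get endpointslices.discovery.k8s.io -A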
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-054604 -n addons-054604
helpers_test.go:270: (dbg) Run:  kubectl --context addons-054604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-054604 describe pod ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-054604 describe pod ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4: exit status 1 (90.737488ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-x5kpk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-484xv" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-2htf4" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-054604 describe pod ingress-nginx-admission-create-x5kpk ingress-nginx-admission-patch-484xv registry-creds-764b6fb674-2htf4: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable headlamp --alsologtostderr -v=1: exit status 11 (265.345791ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:13:49.701087  915022 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:13:49.701983  915022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:13:49.701999  915022 out.go:374] Setting ErrFile to fd 2...
	I1213 10:13:49.702005  915022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:13:49.702303  915022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:13:49.702602  915022 mustload.go:66] Loading cluster: addons-054604
	I1213 10:13:49.702988  915022 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:13:49.703007  915022 addons.go:622] checking whether the cluster is paused
	I1213 10:13:49.703116  915022 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:13:49.703437  915022 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:13:49.703994  915022 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:13:49.721769  915022 ssh_runner.go:195] Run: systemctl --version
	I1213 10:13:49.721832  915022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:13:49.740445  915022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:13:49.848598  915022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:13:49.848684  915022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:13:49.878980  915022 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:13:49.879003  915022 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:13:49.879009  915022 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:13:49.879024  915022 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:13:49.879028  915022 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:13:49.879032  915022 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:13:49.879036  915022 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:13:49.879039  915022 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:13:49.879043  915022 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:13:49.879050  915022 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:13:49.879081  915022 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:13:49.879094  915022 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:13:49.879098  915022 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:13:49.879101  915022 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:13:49.879105  915022 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:13:49.879110  915022 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:13:49.879116  915022 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:13:49.879121  915022 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:13:49.879125  915022 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:13:49.879128  915022 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:13:49.879134  915022 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:13:49.879137  915022 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:13:49.879141  915022 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:13:49.879144  915022 cri.go:89] found id: ""
	I1213 10:13:49.879197  915022 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:13:49.895058  915022 out.go:203] 
	W1213 10:13:49.897823  915022 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:13:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:13:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:13:49.897844  915022 out.go:285] * 
	* 
	W1213 10:13:49.905224  915022 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:13:49.908391  915022 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.28s)
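
All of the addon-disable failures in this report share the stack above: minikube first lists kube-system containers with crictl, then shells out to `sudo runc list -f json` to see whether any are paused, and that second step is what exits 1. Below is a minimal Go sketch of the same two-step check, using only the shell commands captured in the log; the helper name is illustrative, not minikube's actual API.

```go
// Sketch of the paused-container check every "addons disable" run above
// performs (see addons.go:622 and cri.go:54 in the log) before touching
// the addon. Illustrative only; assumes sudo, crictl and runc on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the logged crictl invocation: the IDs
// of all kube-system containers, running or not.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "-s", "eval",
		`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))

	// The step that actually fails throughout this report: /run/runc does
	// not exist on the node, so `runc list` exits 1 and the disable command
	// aborts with MK_ADDON_DISABLE_PAUSED.
	if out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput(); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	}
}
```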

TestAddons/parallel/CloudSpanner (6.28s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-q7dcr" [9a5ef2cd-9241-42f8-b997-11d39887bf60] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004445615s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (269.07998ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:15:10.638365  916933 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:15:10.639373  916933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:15:10.639389  916933 out.go:374] Setting ErrFile to fd 2...
	I1213 10:15:10.639427  916933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:15:10.639977  916933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:15:10.640354  916933 mustload.go:66] Loading cluster: addons-054604
	I1213 10:15:10.640745  916933 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:15:10.640764  916933 addons.go:622] checking whether the cluster is paused
	I1213 10:15:10.640875  916933 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:15:10.640892  916933 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:15:10.641473  916933 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:15:10.668351  916933 ssh_runner.go:195] Run: systemctl --version
	I1213 10:15:10.668413  916933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:15:10.686679  916933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:15:10.792329  916933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:15:10.792465  916933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:15:10.825725  916933 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:15:10.825793  916933 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:15:10.825805  916933 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:15:10.825809  916933 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:15:10.825812  916933 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:15:10.825816  916933 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:15:10.825819  916933 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:15:10.825822  916933 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:15:10.825825  916933 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:15:10.825832  916933 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:15:10.825836  916933 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:15:10.825839  916933 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:15:10.825842  916933 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:15:10.825855  916933 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:15:10.825864  916933 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:15:10.825869  916933 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:15:10.825873  916933 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:15:10.825876  916933 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:15:10.825879  916933 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:15:10.825883  916933 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:15:10.825887  916933 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:15:10.825895  916933 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:15:10.825898  916933 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:15:10.825901  916933 cri.go:89] found id: ""
	I1213 10:15:10.825956  916933 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:15:10.841515  916933 out.go:203] 
	W1213 10:15:10.844495  916933 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:15:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:15:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:15:10.844524  916933 out.go:285] * 
	* 
	W1213 10:15:10.851847  916933 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:15:10.854936  916933 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.28s)
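
The `open /run/runc: no such file or directory` error repeated above means runc's default state root is simply absent on the node, which is what you would expect if CRI-O is driving a different OCI runtime (for example crun, whose state lives under /run/crun) or runc with a non-default --root. A small probe of candidate state directories follows; the path list is an assumption for illustration, not an exhaustive or verified set.

```go
// Probe the usual OCI-runtime state directories to see which runtime is
// actually managing containers on the node. Paths are assumptions for
// illustration; only /run/runc (runc default) and /run/crun (crun default)
// are standard, the third is a hypothetical --runtime-root location.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, dir := range []string{
		"/run/runc",      // runc default state root
		"/run/crun",      // crun default state root
		"/run/runc-crio", // hypothetical non-default root passed by CRI-O (assumption)
	} {
		if st, err := os.Stat(dir); err == nil && st.IsDir() {
			fmt.Println("state dir present:", dir)
		} else {
			fmt.Println("state dir missing:", dir)
		}
	}
}
```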

TestAddons/parallel/LocalPath (8.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-054604 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-054604 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054604 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [ac4f6e65-bde2-4bf0-9ee5-89bc717ad97d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [ac4f6e65-bde2-4bf0-9ee5-89bc717ad97d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [ac4f6e65-bde2-4bf0-9ee5-89bc717ad97d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.006234291s
addons_test.go:969: (dbg) Run:  kubectl --context addons-054604 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 ssh "cat /opt/local-path-provisioner/pvc-39853fcc-b135-458e-957a-4cf093e2ffac_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-054604 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-054604 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (283.918431ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:15:04.336918  916829 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:15:04.337652  916829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:15:04.337684  916829 out.go:374] Setting ErrFile to fd 2...
	I1213 10:15:04.337691  916829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:15:04.338099  916829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:15:04.338479  916829 mustload.go:66] Loading cluster: addons-054604
	I1213 10:15:04.339155  916829 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:15:04.339177  916829 addons.go:622] checking whether the cluster is paused
	I1213 10:15:04.339318  916829 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:15:04.339339  916829 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:15:04.340542  916829 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:15:04.358354  916829 ssh_runner.go:195] Run: systemctl --version
	I1213 10:15:04.358420  916829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:15:04.376194  916829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:15:04.488234  916829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:15:04.488340  916829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:15:04.532587  916829 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:15:04.532650  916829 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:15:04.532670  916829 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:15:04.532694  916829 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:15:04.532731  916829 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:15:04.532755  916829 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:15:04.532777  916829 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:15:04.532800  916829 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:15:04.532819  916829 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:15:04.532855  916829 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:15:04.532874  916829 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:15:04.532897  916829 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:15:04.532917  916829 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:15:04.532951  916829 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:15:04.532978  916829 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:15:04.533001  916829 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:15:04.533042  916829 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:15:04.533071  916829 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:15:04.533095  916829 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:15:04.533120  916829 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:15:04.533174  916829 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:15:04.533200  916829 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:15:04.533222  916829 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:15:04.533285  916829 cri.go:89] found id: ""
	I1213 10:15:04.533359  916829 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:15:04.556887  916829 out.go:203] 
	W1213 10:15:04.560080  916829 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:15:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:15:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:15:04.560116  916829 out.go:285] * 
	* 
	W1213 10:15:04.567494  916829 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:15:04.570672  916829 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.62s)
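
The PVC wait above is implemented as repeated `kubectl get pvc test-pvc -o jsonpath={.status.phase}` polls. The same wait can be expressed directly against the API; here is a sketch with client-go, assuming a kubeconfig at the default location. This is not the helpers_test.go implementation, just an equivalent illustration.

```go
// Sketch of the PVC-phase wait performed above via repeated kubectl
// jsonpath queries, expressed with client-go instead. Assumes a reachable
// cluster and a kubeconfig at the default path; illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls until the named claim reports phase Bound.
func waitForPVCBound(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "default", "test-pvc", 5*time.Minute); err != nil {
		fmt.Println("pvc never bound:", err)
	}
}
```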

TestAddons/parallel/NvidiaDevicePlugin (6.28s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-gzjcp" [b3e1a7fd-9954-4567-821c-410525dd004c] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008625224s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (270.491567ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:14:50.478806  916448 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:50.479678  916448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:50.479727  916448 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:50.479749  916448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:50.480201  916448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:50.480641  916448 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:50.481350  916448 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:50.481401  916448 addons.go:622] checking whether the cluster is paused
	I1213 10:14:50.481631  916448 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:50.481697  916448 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:50.482531  916448 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:50.499637  916448 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:50.499702  916448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:50.517128  916448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:50.628184  916448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:50.628301  916448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:50.657609  916448 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:50.657681  916448 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:50.657701  916448 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:50.657724  916448 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:50.657756  916448 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:50.657779  916448 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:50.657807  916448 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:50.657838  916448 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:50.657863  916448 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:50.657888  916448 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:50.657919  916448 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:50.657941  916448 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:50.657962  916448 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:50.657997  916448 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:50.658019  916448 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:50.658040  916448 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:50.658089  916448 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:50.658114  916448 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:50.658135  916448 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:50.658167  916448 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:50.658194  916448 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:50.658212  916448 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:50.658248  916448 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:50.658269  916448 cri.go:89] found id: ""
	I1213 10:14:50.658359  916448 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:50.675140  916448 out.go:203] 
	W1213 10:14:50.678384  916448 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:50.678422  916448 out.go:285] * 
	* 
	W1213 10:14:50.685820  916448 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:50.689198  916448 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)
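
When `sudo runc list -f json` does succeed, it prints a JSON array of container state objects, and the paused check only needs the id and status fields. A decoding sketch follows; the struct mirrors the fields runc normally emits, but treat it as illustrative rather than a vetted schema.

```go
// Decode the output of `runc list -f json` (the command that exits 1
// throughout this report) and report any paused containers. Field names
// follow runc's usual JSON output; illustrative, not a vetted schema.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

type runcState struct {
	ID      string    `json:"id"`
	Pid     int       `json:"pid"`
	Status  string    `json:"status"` // e.g. "running", "paused"
	Bundle  string    `json:"bundle"`
	Created time.Time `json:"created"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this CI image the call fails before printing anything:
		// "open /run/runc: no such file or directory".
		fmt.Println("runc list failed:", err)
		return
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, s := range states {
		if s.Status == "paused" {
			fmt.Println("paused container:", s.ID)
		}
	}
}
```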

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-wwf8h" [3f3735e3-259a-43b7-876c-e3f9ec8dafa9] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003471972s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054604 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054604 addons disable yakd --alsologtostderr -v=1: exit status 11 (255.579892ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:14:55.744426  916515 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:14:55.745158  916515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:55.745171  916515 out.go:374] Setting ErrFile to fd 2...
	I1213 10:14:55.745177  916515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:14:55.745427  916515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:14:55.745752  916515 mustload.go:66] Loading cluster: addons-054604
	I1213 10:14:55.746120  916515 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:55.746139  916515 addons.go:622] checking whether the cluster is paused
	I1213 10:14:55.746247  916515 config.go:182] Loaded profile config "addons-054604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:14:55.746261  916515 host.go:66] Checking if "addons-054604" exists ...
	I1213 10:14:55.746782  916515 cli_runner.go:164] Run: docker container inspect addons-054604 --format={{.State.Status}}
	I1213 10:14:55.763375  916515 ssh_runner.go:195] Run: systemctl --version
	I1213 10:14:55.763436  916515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054604
	I1213 10:14:55.785929  916515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/addons-054604/id_rsa Username:docker}
	I1213 10:14:55.888364  916515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:14:55.888462  916515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:14:55.919870  916515 cri.go:89] found id: "9dfc412275d47430708bdd9da315ac44e2974752210e0b6c277cd82d7ab105d8"
	I1213 10:14:55.919894  916515 cri.go:89] found id: "411736ab35d3122ee9b77c4fc64fcbda3f988a45ef73863ff00fa52c4fcbb5c7"
	I1213 10:14:55.919901  916515 cri.go:89] found id: "fe7aa6350e217489746a35073aa2da5782e8ada1b47068f824336572cc33c246"
	I1213 10:14:55.919917  916515 cri.go:89] found id: "8a3729518104ccd9e11876774115da030c622637d97b6a762aa635c931085794"
	I1213 10:14:55.919922  916515 cri.go:89] found id: "10f4327e1d3d1b22a00362557e4a294a9b51185d58b7fe190a26ecec8dee2672"
	I1213 10:14:55.919925  916515 cri.go:89] found id: "1380fedb08b07c143b7e5940b5fea400bb730d0da239b5791d19f9a08901e231"
	I1213 10:14:55.919928  916515 cri.go:89] found id: "441a32eb57b513f29176ca9f6dd18328104a9b5fa79ee380cd08a41f7978ec90"
	I1213 10:14:55.919931  916515 cri.go:89] found id: "f871736240871de0f1ef464a002684e2ece515c0dfa8fd5f8d5b13b4e565c68e"
	I1213 10:14:55.919935  916515 cri.go:89] found id: "3ded6e57579bd0a8c2ad26ac6e93cbdb9c7b06cd00dbc61e1e85f832e73f085f"
	I1213 10:14:55.919941  916515 cri.go:89] found id: "b0e2d0e7e16b279110b53187dea2355419b23a459b2cf25f96d88c2db0f68d2b"
	I1213 10:14:55.919948  916515 cri.go:89] found id: "02565e8c756a3812567eee3588c54df43097a3c2581ba9063db5d5e26597a5cc"
	I1213 10:14:55.919951  916515 cri.go:89] found id: "17cb3c6d340024b2539323934bdce363102d990353d8c21c5c48b7842be6369c"
	I1213 10:14:55.919955  916515 cri.go:89] found id: "26436889f5cc97c312f42a74136b21c2c06338b4dfc8ef04436984ecf52e0137"
	I1213 10:14:55.919958  916515 cri.go:89] found id: "43da519983e66338f032f8e084a64f058e11aee1e710391af977a7cac3c7a851"
	I1213 10:14:55.919961  916515 cri.go:89] found id: "b95fb046aaf43fa10b6d2e5c93912f378f6234809f2725f67302ba08933bf075"
	I1213 10:14:55.919974  916515 cri.go:89] found id: "6211c2eaceea48cd7564d6e61228c6caae19ee3c9becf5796dfe85344142c6f9"
	I1213 10:14:55.919982  916515 cri.go:89] found id: "f5883fd88845b71596a62cc554ff445150ecbdc4f555d4ecde337e35133a26a6"
	I1213 10:14:55.919986  916515 cri.go:89] found id: "ef33020503a2d05204007d80967d03b004d2f713bb9d624b96f03468c0ea093d"
	I1213 10:14:55.919989  916515 cri.go:89] found id: "5add978c4ef1694390a3d23a377353da04049787988a6975f63db25d97f83d26"
	I1213 10:14:55.919993  916515 cri.go:89] found id: "dc808fcd2f20cbb36aefb288cc12021843e1d6fb5c3826f37451c82b9ec46a14"
	I1213 10:14:55.919998  916515 cri.go:89] found id: "20394cb8143630b89075746bcaf2fcc0ab2ad362bbfcfdd47a2cd53854bf8283"
	I1213 10:14:55.920001  916515 cri.go:89] found id: "e1f2fa7dc8f92abbea7fb441095c9aeec308a85e7b5d309ca12d373510309517"
	I1213 10:14:55.920004  916515 cri.go:89] found id: "3554210f6ef5f452792fd9b76f594ebd610b0877229d8a31c3107d175d62b9d0"
	I1213 10:14:55.920008  916515 cri.go:89] found id: ""
	I1213 10:14:55.920068  916515 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:14:55.937568  916515 out.go:203] 
	W1213 10:14:55.940544  916515 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:14:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:14:55.940575  916515 out.go:285] * 
	* 
	W1213 10:14:55.948050  916515 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:14:55.950964  916515 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-054604 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)
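
Each of these addon tests first performs a label-selector pod wait (here, 2m0s for pods matching app.kubernetes.io/name=yakd-dashboard) before attempting the disable that fails. Below is a client-go sketch of such a wait, assuming a default kubeconfig; it is illustrative, not the helpers_test.go code.

```go
// Sketch of a label-selector pod wait like the ones logged above: poll
// until at least one matching pod exists and all matches are Running.
// Assumes a default kubeconfig; illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunningPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForRunningPods(cs, "yakd-dashboard", "app.kubernetes.io/name=yakd-dashboard", 2*time.Minute)
	fmt.Println("wait result:", err)
}
```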

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 10:21:21.713672  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:37.839923  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:24:05.556821  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:25.732764  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:25.739147  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:25.750935  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:25.772336  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:25.813812  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:25.895336  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:26.056954  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:26.378679  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:27.020766  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:28.302700  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:30.865677  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:35.987051  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:46.229412  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:06.710899  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:47.672366  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:28:09.594691  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:28:37.841164  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.006052088s)

-- stdout --
	* [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:43873
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:43873 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-200955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-200955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226177s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
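
The kubeadm output above is self-diagnosing: it polls the kubelet's healthz endpoint, and when that times out it points at 'systemctl status kubelet', 'journalctl -xeu kubelet', and the cgroup-driver override in the "Suggestion" line. A minimal triage sketch built only from the commands and flags recorded in this run (the profile name and all flag values are taken from the log above; this is an annotation, not part of the recorded run):

    # Re-run the exact health probe kubeadm was polling, from inside the node:
    minikube -p functional-200955 ssh -- curl -sSL http://127.0.0.1:10248/healthz

    # Inspect the kubelet unit, as the kubeadm output suggests:
    minikube -p functional-200955 ssh -- sudo systemctl status kubelet
    minikube -p functional-200955 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # Retry the start with the cgroup-driver override from the "Suggestion" line
    # (other flags from the recorded args elided for brevity):
    minikube start -p functional-200955 --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd
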
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
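
The repeated SystemVerification warning also names a kubelet configuration option for keeping cgroup v1 support on kubelet v1.35+. As a sketch only: the option would live in a KubeletConfiguration document like the one below. The camelCase YAML spelling of the 'FailCgroupV1' field is an assumption (the warning gives only the Go field name), and the file path is hypothetical; minikube applies kubelet configuration through the "[patches] ... kubeletconfiguration" step visible in the stdout above, so this fragment is illustrative, not a file from this run:

    # Hypothetical KubeletConfiguration fragment; field spelling follows the usual
    # Go-field-to-YAML convention and is an assumption, not taken from this log.
    cat <<'EOF' > /tmp/kubelet-cgroupv1.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # per the warning: 'false' keeps cgroup v1 support enabled
    EOF
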
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
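
The inspect payload above records how minikube publishes the node's ports onto 127.0.0.1 (for example, apiserver port 8441/tcp mapped to host port 33526). A quick way to read a mapping back out, sketched in the same Go-template style the minikube logs below use for 22/tcp; assumes the container is still running:

    # Host port backing the apiserver, per the NetworkSettings.Ports block above:
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' \
      functional-200955

    # Equivalent shortcut; prints 127.0.0.1:33526 for this run:
    docker port functional-200955 8441/tcp
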
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 6 (310.267585ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:29:40.427462  941182 status.go:458] kubeconfig endpoint: get endpoint: "functional-200955" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig

** /stderr **
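
The status output is explicit about the kubeconfig problem: the "functional-200955" entry is missing from the kubeconfig file, so kubectl points at a stale context. The fix the warning itself names, sketched for this profile:

    # Repair the kubectl context for this profile, as the warning suggests:
    minikube -p functional-200955 update-context

    # Verify which context kubectl now targets:
    kubectl config current-context
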
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/907484.pem                                                                                          │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /usr/share/ca-certificates/907484.pem                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/test/nested/copy/907484/hosts                                                                                 │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/9074842.pem                                                                                         │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /usr/share/ca-certificates/9074842.pem                                                                             │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /home/docker/cp-test.txt                                                                      │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp functional-769798:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1526269303/001/cp-test.txt                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /home/docker/cp-test.txt                                                                      │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                         │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format short --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /tmp/does/not/exist/cp-test.txt                                                               │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format yaml --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh pgrep buildkitd                                                                                                             │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ image          │ functional-769798 image ls --format json --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr                                            │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format table --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls                                                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete         │ -p functional-769798                                                                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start          │ -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:21:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:21:19.138877  935602 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:21:19.139011  935602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:21:19.139015  935602 out.go:374] Setting ErrFile to fd 2...
	I1213 10:21:19.139019  935602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:21:19.139321  935602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:21:19.139846  935602 out.go:368] Setting JSON to false
	I1213 10:21:19.140856  935602 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18229,"bootTime":1765603051,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:21:19.140929  935602 start.go:143] virtualization:  
	I1213 10:21:19.145595  935602 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:21:19.150416  935602 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:21:19.150469  935602 notify.go:221] Checking for updates...
	I1213 10:21:19.157482  935602 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:21:19.160891  935602 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:21:19.164182  935602 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:21:19.167361  935602 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:21:19.170672  935602 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:21:19.174102  935602 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:21:19.196765  935602 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:21:19.196877  935602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:21:19.256860  935602 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:21:19.247575243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:21:19.256960  935602 docker.go:319] overlay module found
	I1213 10:21:19.260347  935602 out.go:179] * Using the docker driver based on user configuration
	I1213 10:21:19.263445  935602 start.go:309] selected driver: docker
	I1213 10:21:19.263458  935602 start.go:927] validating driver "docker" against <nil>
	I1213 10:21:19.263472  935602 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:21:19.264291  935602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:21:19.331421  935602 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:21:19.321040986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:21:19.331588  935602 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:21:19.331850  935602 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:21:19.334958  935602 out.go:179] * Using Docker driver with root privileges
	I1213 10:21:19.338021  935602 cni.go:84] Creating CNI manager for ""
	I1213 10:21:19.338080  935602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:21:19.338087  935602 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:21:19.338169  935602 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:21:19.341455  935602 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:21:19.344406  935602 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:21:19.347327  935602 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:21:19.350404  935602 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:21:19.350440  935602 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:21:19.350462  935602 cache.go:65] Caching tarball of preloaded images
	I1213 10:21:19.350489  935602 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:21:19.350566  935602 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:21:19.350575  935602 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:21:19.350956  935602 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:21:19.350978  935602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json: {Name:mke6b901f37061a450420751de1a2890ca3f3938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:19.369461  935602 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:21:19.369471  935602 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:21:19.369494  935602 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:21:19.369528  935602 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:21:19.369660  935602 start.go:364] duration metric: took 98.479µs to acquireMachinesLock for "functional-200955"
	I1213 10:21:19.369687  935602 start.go:93] Provisioning new machine with config: &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:21:19.369759  935602 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:21:19.373249  935602 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1213 10:21:19.373527  935602 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:43873 to docker env.
	I1213 10:21:19.373574  935602 start.go:159] libmachine.API.Create for "functional-200955" (driver="docker")
	I1213 10:21:19.373601  935602 client.go:173] LocalClient.Create starting
	I1213 10:21:19.373684  935602 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem
	I1213 10:21:19.373714  935602 main.go:143] libmachine: Decoding PEM data...
	I1213 10:21:19.373732  935602 main.go:143] libmachine: Parsing certificate...
	I1213 10:21:19.373785  935602 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem
	I1213 10:21:19.373808  935602 main.go:143] libmachine: Decoding PEM data...
	I1213 10:21:19.373819  935602 main.go:143] libmachine: Parsing certificate...
	I1213 10:21:19.374160  935602 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:21:19.389399  935602 cli_runner.go:211] docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:21:19.389483  935602 network_create.go:284] running [docker network inspect functional-200955] to gather additional debugging logs...
	I1213 10:21:19.389497  935602 cli_runner.go:164] Run: docker network inspect functional-200955
	W1213 10:21:19.412662  935602 cli_runner.go:211] docker network inspect functional-200955 returned with exit code 1
	I1213 10:21:19.412681  935602 network_create.go:287] error running [docker network inspect functional-200955]: docker network inspect functional-200955: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-200955 not found
	I1213 10:21:19.412692  935602 network_create.go:289] output of [docker network inspect functional-200955]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-200955 not found
	
	** /stderr **
	I1213 10:21:19.412801  935602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:21:19.429119  935602 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001949300}
	I1213 10:21:19.429148  935602 network_create.go:124] attempt to create docker network functional-200955 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 10:21:19.429208  935602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-200955 functional-200955
	I1213 10:21:19.492279  935602 network_create.go:108] docker network functional-200955 192.168.49.0/24 created
	I1213 10:21:19.492301  935602 kic.go:121] calculated static IP "192.168.49.2" for the "functional-200955" container
	I1213 10:21:19.492376  935602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:21:19.507604  935602 cli_runner.go:164] Run: docker volume create functional-200955 --label name.minikube.sigs.k8s.io=functional-200955 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:21:19.526952  935602 oci.go:103] Successfully created a docker volume functional-200955
	I1213 10:21:19.527046  935602 cli_runner.go:164] Run: docker run --rm --name functional-200955-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-200955 --entrypoint /usr/bin/test -v functional-200955:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:21:20.073160  935602 oci.go:107] Successfully prepared a docker volume functional-200955
	I1213 10:21:20.073217  935602 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:21:20.073226  935602 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:21:20.073308  935602 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-200955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:21:23.986947  935602 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-200955:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.913602762s)
	I1213 10:21:23.986969  935602 kic.go:203] duration metric: took 3.913740281s to extract preloaded images to volume ...
	W1213 10:21:23.987127  935602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:21:23.987232  935602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:21:24.047504  935602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-200955 --name functional-200955 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-200955 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-200955 --network functional-200955 --ip 192.168.49.2 --volume functional-200955:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:21:24.333109  935602 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Running}}
	I1213 10:21:24.355388  935602 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:21:24.374818  935602 cli_runner.go:164] Run: docker exec functional-200955 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:21:24.425657  935602 oci.go:144] the created container "functional-200955" has a running status.
	I1213 10:21:24.425677  935602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa...
	I1213 10:21:24.751290  935602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:21:24.775278  935602 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:21:24.799284  935602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:21:24.799296  935602 kic_runner.go:114] Args: [docker exec --privileged functional-200955 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:21:24.866837  935602 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:21:24.896833  935602 machine.go:94] provisionDockerMachine start ...
	I1213 10:21:24.896912  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:24.925222  935602 main.go:143] libmachine: Using SSH client type: native
	I1213 10:21:24.925646  935602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:21:24.925654  935602 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:21:24.926403  935602 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35426->127.0.0.1:33523: read: connection reset by peer
	I1213 10:21:28.077081  935602 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:21:28.077095  935602 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:21:28.077172  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:28.095757  935602 main.go:143] libmachine: Using SSH client type: native
	I1213 10:21:28.096059  935602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:21:28.096067  935602 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:21:28.255501  935602 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:21:28.255579  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:28.273371  935602 main.go:143] libmachine: Using SSH client type: native
	I1213 10:21:28.273697  935602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:21:28.273711  935602 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:21:28.426037  935602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:21:28.426053  935602 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:21:28.426071  935602 ubuntu.go:190] setting up certificates
	I1213 10:21:28.426080  935602 provision.go:84] configureAuth start
	I1213 10:21:28.426141  935602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:21:28.444136  935602 provision.go:143] copyHostCerts
	I1213 10:21:28.444195  935602 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:21:28.444202  935602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:21:28.444294  935602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:21:28.444398  935602 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:21:28.444402  935602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:21:28.444427  935602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:21:28.444487  935602 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:21:28.444490  935602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:21:28.444513  935602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:21:28.444568  935602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
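	(The server cert above is issued with exactly that SAN list so the machine is reachable by IP and by name. A minimal Go sketch of issuing a certificate with matching SANs; it self-signs for brevity, whereas the real flow signs with the minikube CA key:)

// Sketch: issue a server certificate whose SANs mirror the list logged
// above ([127.0.0.1 192.168.49.2 functional-200955 localhost minikube]).
// Self-signed here for brevity; minikube signs with its CA instead.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-200955"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in this log
		DNSNames:     []string{"functional-200955", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}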
	I1213 10:21:28.782282  935602 provision.go:177] copyRemoteCerts
	I1213 10:21:28.782336  935602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:21:28.782385  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:28.801148  935602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:21:28.905230  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:21:28.922110  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:21:28.939005  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:21:28.956054  935602 provision.go:87] duration metric: took 529.952094ms to configureAuth
	I1213 10:21:28.956071  935602 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:21:28.956289  935602 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:21:28.956396  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:28.973721  935602 main.go:143] libmachine: Using SSH client type: native
	I1213 10:21:28.974054  935602 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:21:28.974065  935602 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:21:29.282632  935602 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:21:29.282646  935602 machine.go:97] duration metric: took 4.385801121s to provisionDockerMachine
	I1213 10:21:29.282656  935602 client.go:176] duration metric: took 9.909050214s to LocalClient.Create
	I1213 10:21:29.282669  935602 start.go:167] duration metric: took 9.909095827s to libmachine.API.Create "functional-200955"
	I1213 10:21:29.282675  935602 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:21:29.282686  935602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:21:29.282760  935602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:21:29.282802  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:29.300514  935602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:21:29.405576  935602 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:21:29.408856  935602 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:21:29.408876  935602 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:21:29.408888  935602 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:21:29.408950  935602 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:21:29.409041  935602 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:21:29.409122  935602 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:21:29.409167  935602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:21:29.417160  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:21:29.434767  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:21:29.452430  935602 start.go:296] duration metric: took 169.742113ms for postStartSetup
	I1213 10:21:29.452798  935602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:21:29.469310  935602 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:21:29.469666  935602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:21:29.469711  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:29.487152  935602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:21:29.590619  935602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:21:29.595155  935602 start.go:128] duration metric: took 10.225382475s to createHost
	I1213 10:21:29.595171  935602 start.go:83] releasing machines lock for "functional-200955", held for 10.225503083s
	I1213 10:21:29.595242  935602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:21:29.615798  935602 out.go:179] * Found network options:
	I1213 10:21:29.618837  935602 out.go:179]   - HTTP_PROXY=localhost:43873
	W1213 10:21:29.621642  935602 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1213 10:21:29.624504  935602 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 10:21:29.627427  935602 ssh_runner.go:195] Run: cat /version.json
	I1213 10:21:29.627470  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:29.627652  935602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:21:29.627851  935602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:21:29.651604  935602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:21:29.653197  935602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:21:29.753085  935602 ssh_runner.go:195] Run: systemctl --version
	I1213 10:21:29.840547  935602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:21:29.876043  935602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:21:29.880280  935602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:21:29.880345  935602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:21:29.909965  935602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 10:21:29.909978  935602 start.go:496] detecting cgroup driver to use...
	I1213 10:21:29.910009  935602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:21:29.910064  935602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:21:29.927497  935602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:21:29.939730  935602 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:21:29.939783  935602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:21:29.956913  935602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:21:29.975531  935602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:21:30.133870  935602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:21:30.266916  935602 docker.go:234] disabling docker service ...
	I1213 10:21:30.266972  935602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:21:30.289423  935602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:21:30.302519  935602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:21:30.424187  935602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:21:30.546652  935602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:21:30.560430  935602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:21:30.573942  935602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:21:30.574003  935602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:21:30.582561  935602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:21:30.582624  935602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:21:30.591660  935602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:21:30.600398  935602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:21:30.608981  935602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:21:30.617355  935602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:21:30.626349  935602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:21:30.639846  935602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:21:30.648744  935602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:21:30.656333  935602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:21:30.663889  935602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:21:30.782768  935602 ssh_runner.go:195] Run: sudo systemctl restart crio
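	(The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place — pause image, cgroup manager, conmon cgroup, default sysctls — before the daemon-reload and CRI-O restart. A minimal Go sketch of the first two rewrites, operating on a scratch copy of the drop-in:)

// Sketch of the sed-style edits above, done in Go: point CRI-O at the
// desired pause image and cgroup manager by rewriting matching lines.
// The path is a scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// (?m) makes ^/$ match per line, like sed's line-oriented s|...|...|
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
}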
	I1213 10:21:30.950284  935602 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:21:30.950344  935602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:21:30.954048  935602 start.go:564] Will wait 60s for crictl version
	I1213 10:21:30.954119  935602 ssh_runner.go:195] Run: which crictl
	I1213 10:21:30.957555  935602 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:21:30.985416  935602 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
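	(After restarting CRI-O, the log shows two bounded waits: up to 60s for the socket path and up to 60s for a crictl version, which then reports cri-o 1.34.3. A minimal Go sketch of that wait-for-path-with-timeout pattern:)

// Sketch of "Will wait 60s for socket path": poll for a file to appear,
// giving up after a deadline. Path is the CRI-O socket from the log.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket is up")
}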
	I1213 10:21:30.985511  935602 ssh_runner.go:195] Run: crio --version
	I1213 10:21:31.021297  935602 ssh_runner.go:195] Run: crio --version
	I1213 10:21:31.053607  935602 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:21:31.056515  935602 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:21:31.072989  935602 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:21:31.076953  935602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:21:31.087679  935602 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1213 10:21:31.087794  935602 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:21:31.087850  935602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:21:31.124450  935602 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:21:31.124462  935602 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:21:31.124522  935602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:21:31.156159  935602 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:21:31.156170  935602 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:21:31.156181  935602 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:21:31.156273  935602 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:21:31.156359  935602 ssh_runner.go:195] Run: crio config
	I1213 10:21:31.210169  935602 cni.go:84] Creating CNI manager for ""
	I1213 10:21:31.210180  935602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:21:31.210197  935602 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:21:31.210220  935602 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:21:31.210332  935602 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
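	(The generated kubeadm config printed above is four YAML documents in one file — InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — separated by "---". A stdlib-only Go sketch that splits such a file and reports each document's kind, assuming a scratch copy named kubeadm.yaml; a real tool would use a YAML decoder instead of this naive split:)

// Sketch: enumerate the kinds in a multi-document kubeadm config like
// the one above, using only the standard library.
package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // scratch copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		log.Fatal(err)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
	// Expected for the config above:
	// document 1: InitConfiguration
	// document 2: ClusterConfiguration
	// document 3: KubeletConfiguration
	// document 4: KubeProxyConfiguration
}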
	
	I1213 10:21:31.210403  935602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:21:31.217937  935602 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:21:31.218000  935602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:21:31.225755  935602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:21:31.238509  935602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:21:31.251006  935602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:21:31.264140  935602 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:21:31.267916  935602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:21:31.277297  935602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:21:31.383372  935602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:21:31.398827  935602 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:21:31.398838  935602 certs.go:195] generating shared ca certs ...
	I1213 10:21:31.398865  935602 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:31.399020  935602 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:21:31.399063  935602 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:21:31.399069  935602 certs.go:257] generating profile certs ...
	I1213 10:21:31.399130  935602 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:21:31.399140  935602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt with IP's: []
	I1213 10:21:31.633530  935602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt ...
	I1213 10:21:31.633566  935602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: {Name:mk806fa506c46964b896bfe751d7803c301f80ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:31.633797  935602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key ...
	I1213 10:21:31.633804  935602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key: {Name:mk60a5eb284115b124a3f23ba48fc31e700ad86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:31.633905  935602 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:21:31.633917  935602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt.8da389ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 10:21:32.124511  935602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt.8da389ed ...
	I1213 10:21:32.124528  935602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt.8da389ed: {Name:mk8230fb7db8c6b4ec7e6fc2a84562f5d103a1fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:32.124761  935602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed ...
	I1213 10:21:32.124769  935602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed: {Name:mk098d8dcdc957a3671fd87d3bb631429052f50f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:32.124857  935602 certs.go:382] copying /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt.8da389ed -> /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt
	I1213 10:21:32.124935  935602 certs.go:386] copying /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed -> /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key
	I1213 10:21:32.124992  935602 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:21:32.125004  935602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt with IP's: []
	I1213 10:21:32.874362  935602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt ...
	I1213 10:21:32.874378  935602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt: {Name:mka2d71e84e5e24e154d28d2f9ba6ddece401fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:32.874591  935602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key ...
	I1213 10:21:32.874600  935602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key: {Name:mk70c0cb68b2585950f838138423628c01899a3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:32.874804  935602 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:21:32.874844  935602 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:21:32.874859  935602 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:21:32.874891  935602 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:21:32.874913  935602 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:21:32.874937  935602 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:21:32.874984  935602 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:21:32.875579  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:21:32.894915  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:21:32.915585  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:21:32.936533  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:21:32.955613  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:21:32.976763  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:21:32.997979  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:21:33.021152  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:21:33.044554  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:21:33.064696  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:21:33.083637  935602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:21:33.102295  935602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:21:33.115952  935602 ssh_runner.go:195] Run: openssl version
	I1213 10:21:33.122761  935602 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:21:33.130298  935602 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:21:33.138564  935602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:21:33.142537  935602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:21:33.142593  935602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:21:33.184721  935602 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:21:33.192671  935602 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:21:33.200015  935602 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:21:33.207108  935602 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:21:33.214806  935602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:21:33.218781  935602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:21:33.218836  935602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:21:33.262976  935602 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:21:33.270796  935602 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/907484.pem /etc/ssl/certs/51391683.0
	I1213 10:21:33.278413  935602 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:21:33.285740  935602 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:21:33.294511  935602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:21:33.298553  935602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:21:33.298618  935602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:21:33.342985  935602 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:21:33.351012  935602 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9074842.pem /etc/ssl/certs/3ec20f2e.0
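	(The openssl x509 -hash runs above compute OpenSSL's subject-hash names — b5213941.0, 51391683.0, 3ec20f2e.0 in this log — because OpenSSL resolves CAs in /etc/ssl/certs via "<subject-hash>.0" symlinks. A Go sketch of the hash-and-symlink step, shelling out to openssl; the paths are illustrative and the target directory needs root:)

// Sketch of the hash-symlink dance above: compute the subject hash with
// openssl, then create the "<hash>.0" link OpenSSL will look up.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}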
	I1213 10:21:33.358615  935602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:21:33.362247  935602 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:21:33.362289  935602 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:21:33.362354  935602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:21:33.362409  935602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:21:33.389563  935602 cri.go:89] found id: ""
	I1213 10:21:33.389628  935602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:21:33.398278  935602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:21:33.406551  935602 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:21:33.406608  935602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:21:33.414285  935602 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:21:33.414294  935602 kubeadm.go:158] found existing configuration files:
	
	I1213 10:21:33.414354  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:21:33.422171  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:21:33.422224  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:21:33.429775  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:21:33.437428  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:21:33.437493  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:21:33.445065  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:21:33.453334  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:21:33.453386  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:21:33.461277  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:21:33.469036  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:21:33.469133  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:21:33.477023  935602 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:21:33.521322  935602 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:21:33.521371  935602 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:21:33.595211  935602 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:21:33.595276  935602 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:21:33.595310  935602 kubeadm.go:319] OS: Linux
	I1213 10:21:33.595354  935602 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:21:33.595400  935602 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:21:33.595446  935602 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:21:33.595493  935602 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:21:33.595540  935602 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:21:33.595586  935602 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:21:33.595633  935602 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:21:33.595679  935602 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:21:33.595724  935602 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:21:33.669912  935602 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:21:33.670015  935602 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:21:33.670105  935602 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:21:33.679496  935602 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:21:33.687020  935602 out.go:252]   - Generating certificates and keys ...
	I1213 10:21:33.687114  935602 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:21:33.687177  935602 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:21:33.771828  935602 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:21:34.143995  935602 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:21:34.397665  935602 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:21:34.783993  935602 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:21:34.941307  935602 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:21:34.941657  935602 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-200955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:21:35.293907  935602 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:21:35.294243  935602 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-200955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:21:35.497260  935602 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:21:35.631458  935602 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:21:36.011490  935602 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:21:36.012030  935602 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:21:36.268192  935602 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:21:36.363007  935602 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:21:36.646393  935602 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:21:36.817007  935602 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:21:36.957052  935602 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:21:36.957744  935602 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:21:36.960967  935602 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:21:36.964587  935602 out.go:252]   - Booting up control plane ...
	I1213 10:21:36.964688  935602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:21:36.964765  935602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:21:36.965502  935602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:21:36.981066  935602 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:21:36.981276  935602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:21:36.989172  935602 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:21:36.989877  935602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:21:36.990053  935602 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:21:37.131661  935602 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:21:37.131773  935602 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:25:37.133158  935602 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226177s
	I1213 10:25:37.133183  935602 kubeadm.go:319] 
	I1213 10:25:37.133286  935602 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:25:37.133346  935602 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:25:37.133782  935602 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:25:37.133793  935602 kubeadm.go:319] 
	I1213 10:25:37.133991  935602 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:25:37.134050  935602 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:25:37.134104  935602 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:25:37.134109  935602 kubeadm.go:319] 
	I1213 10:25:37.138977  935602 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:25:37.139415  935602 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:25:37.139524  935602 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:25:37.139880  935602 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:25:37.139886  935602 kubeadm.go:319] 
	I1213 10:25:37.140015  935602 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:25:37.140070  935602 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-200955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-200955 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001226177s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
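	(The failure mode here is kubeadm's wait-control-plane phase giving up after 4m0s of polling the kubelet's local health endpoint. A minimal Go sketch of that probe loop — polling http://127.0.0.1:10248/healthz under a 4-minute context deadline, which is effectively the check that produced the "context deadline exceeded" above:)

// Sketch of the kubelet health probe that times out in this log.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func waitKubeletHealthy(ctx context.Context) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet answered healthy
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err()) // mirrors "context deadline exceeded"
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitKubeletHealthy(ctx); err != nil {
		fmt.Println(err)
	}
}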
	
	I1213 10:25:37.140163  935602 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:25:37.551364  935602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:25:37.565057  935602 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:25:37.565112  935602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:25:37.573969  935602 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:25:37.573980  935602 kubeadm.go:158] found existing configuration files:
	
	I1213 10:25:37.574067  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:25:37.582939  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:25:37.582995  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:25:37.591375  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:25:37.600424  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:25:37.600480  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:25:37.608918  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:25:37.617674  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:25:37.617731  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:25:37.626223  935602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:25:37.635394  935602 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:25:37.635453  935602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:25:37.644109  935602 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:25:37.762260  935602 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:25:37.762651  935602 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:25:37.826753  935602 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:29:39.638675  935602 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:29:39.638696  935602 kubeadm.go:319] 
	I1213 10:29:39.638768  935602 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:29:39.644279  935602 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:29:39.644363  935602 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:29:39.644530  935602 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:29:39.644630  935602 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:29:39.644689  935602 kubeadm.go:319] OS: Linux
	I1213 10:29:39.644768  935602 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:29:39.644853  935602 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:29:39.644936  935602 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:29:39.645033  935602 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:29:39.645129  935602 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:29:39.645212  935602 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:29:39.645292  935602 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:29:39.645389  935602 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:29:39.645585  935602 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:29:39.645713  935602 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:29:39.645931  935602 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:29:39.646096  935602 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:29:39.646215  935602 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:29:39.649588  935602 out.go:252]   - Generating certificates and keys ...
	I1213 10:29:39.649685  935602 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:29:39.649749  935602 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:29:39.649840  935602 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:29:39.649928  935602 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:29:39.650012  935602 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:29:39.650064  935602 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:29:39.650125  935602 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:29:39.650182  935602 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:29:39.650257  935602 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:29:39.650324  935602 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:29:39.650367  935602 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:29:39.650424  935602 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:29:39.650473  935602 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:29:39.650527  935602 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:29:39.650582  935602 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:29:39.650676  935602 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:29:39.650746  935602 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:29:39.650837  935602 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:29:39.650922  935602 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:29:39.653716  935602 out.go:252]   - Booting up control plane ...
	I1213 10:29:39.653810  935602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:29:39.653906  935602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:29:39.653987  935602 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:29:39.654084  935602 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:29:39.654173  935602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:29:39.654270  935602 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:29:39.654349  935602 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:29:39.654390  935602 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:29:39.654512  935602 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:29:39.654609  935602 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:29:39.654668  935602 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001198272s
	I1213 10:29:39.654671  935602 kubeadm.go:319] 
	I1213 10:29:39.654723  935602 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:29:39.654753  935602 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:29:39.654850  935602 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:29:39.654853  935602 kubeadm.go:319] 
	I1213 10:29:39.654951  935602 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:29:39.654979  935602 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:29:39.655007  935602 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:29:39.655074  935602 kubeadm.go:403] duration metric: took 8m6.292787517s to StartCluster
	I1213 10:29:39.655120  935602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:29:39.655182  935602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:29:39.655373  935602 kubeadm.go:319] 
	I1213 10:29:39.680237  935602 cri.go:89] found id: ""
	I1213 10:29:39.680261  935602 logs.go:282] 0 containers: []
	W1213 10:29:39.680273  935602 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:29:39.680278  935602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:29:39.680338  935602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:29:39.707314  935602 cri.go:89] found id: ""
	I1213 10:29:39.707328  935602 logs.go:282] 0 containers: []
	W1213 10:29:39.707346  935602 logs.go:284] No container was found matching "etcd"
	I1213 10:29:39.707351  935602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:29:39.707417  935602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:29:39.736274  935602 cri.go:89] found id: ""
	I1213 10:29:39.736287  935602 logs.go:282] 0 containers: []
	W1213 10:29:39.736293  935602 logs.go:284] No container was found matching "coredns"
	I1213 10:29:39.736298  935602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:29:39.736357  935602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:29:39.762315  935602 cri.go:89] found id: ""
	I1213 10:29:39.762329  935602 logs.go:282] 0 containers: []
	W1213 10:29:39.762335  935602 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:29:39.762340  935602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:29:39.762408  935602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:29:39.792549  935602 cri.go:89] found id: ""
	I1213 10:29:39.792562  935602 logs.go:282] 0 containers: []
	W1213 10:29:39.792569  935602 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:29:39.792574  935602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:29:39.792632  935602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:29:39.820292  935602 cri.go:89] found id: ""
	I1213 10:29:39.820315  935602 logs.go:282] 0 containers: []
	W1213 10:29:39.820322  935602 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:29:39.820326  935602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:29:39.820392  935602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:29:39.845323  935602 cri.go:89] found id: ""
	I1213 10:29:39.845338  935602 logs.go:282] 0 containers: []
	W1213 10:29:39.845344  935602 logs.go:284] No container was found matching "kindnet"
	I1213 10:29:39.845352  935602 logs.go:123] Gathering logs for kubelet ...
	I1213 10:29:39.845363  935602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:29:39.913104  935602 logs.go:123] Gathering logs for dmesg ...
	I1213 10:29:39.913123  935602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:29:39.928315  935602 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:29:39.928335  935602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:29:39.997159  935602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:29:39.988748    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.989643    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.991318    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.991635    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.993137    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:29:39.988748    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.989643    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.991318    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.991635    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:39.993137    4856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:29:39.997174  935602 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:29:39.997186  935602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:29:40.035465  935602 logs.go:123] Gathering logs for container status ...
	I1213 10:29:40.035488  935602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:29:40.079522  935602 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001198272s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:29:40.079555  935602 out.go:285] * 
	W1213 10:29:40.079666  935602 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical kubeadm init output as above; omitted]
	
	W1213 10:29:40.079688  935602 out.go:285] * 
	W1213 10:29:40.081925  935602 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:29:40.086871  935602 out.go:203] 
	W1213 10:29:40.090909  935602 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical kubeadm init output as above; omitted]
	
	W1213 10:29:40.090971  935602 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:29:40.090990  935602 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:29:40.094213  935602 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.942845325Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.94288567Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.94295086Z" level=info msg="Create NRI interface"
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.943096774Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.94311636Z" level=info msg="runtime interface created"
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.943132721Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.943140196Z" level=info msg="runtime interface starting up..."
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.943150616Z" level=info msg="starting plugins..."
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.943163925Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:21:30 functional-200955 crio[845]: time="2025-12-13T10:21:30.94326512Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:21:30 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:21:33 functional-200955 crio[845]: time="2025-12-13T10:21:33.673643005Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=2750e43e-1b1c-4f09-b686-a7ba6ff8a63c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:21:33 functional-200955 crio[845]: time="2025-12-13T10:21:33.674934737Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=9b75e3e8-3024-432c-ad6e-d6f9314dd683 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:21:33 functional-200955 crio[845]: time="2025-12-13T10:21:33.675594248Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=bbb56078-b629-4d2d-8638-3c4b5d4dbe6a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:21:33 functional-200955 crio[845]: time="2025-12-13T10:21:33.676287762Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=37fa79ca-1a4e-4436-8dcc-ab4c154c1724 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:21:33 functional-200955 crio[845]: time="2025-12-13T10:21:33.67686975Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f2afa5a7-8fdc-4408-9b4f-51351372829f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:21:33 functional-200955 crio[845]: time="2025-12-13T10:21:33.677461962Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a9bedd29-890d-4897-8dea-3b7d0851640d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:21:33 functional-200955 crio[845]: time="2025-12-13T10:21:33.678112718Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=28c51a2e-300f-4029-b3dc-f1b86b189378 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:25:37 functional-200955 crio[845]: time="2025-12-13T10:25:37.830477176Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3621418e-cfc4-419a-a45a-fd189d50a7b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:25:37 functional-200955 crio[845]: time="2025-12-13T10:25:37.831306505Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d5eb6e7c-8b67-4b1e-b1cf-9ac410640e62 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:25:37 functional-200955 crio[845]: time="2025-12-13T10:25:37.831850819Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9852bd04-cd3d-4fd5-b0d0-41189d5b284a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:25:37 functional-200955 crio[845]: time="2025-12-13T10:25:37.832468216Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b5258a1f-1698-469b-9242-e6d3b84dd776 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:25:37 functional-200955 crio[845]: time="2025-12-13T10:25:37.832993175Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=2e4db869-08c8-4f8a-8809-2a6c496d6a8f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:25:37 functional-200955 crio[845]: time="2025-12-13T10:25:37.833451302Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d31b58c2-ecb7-465b-b144-732fd99fce05 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:25:37 functional-200955 crio[845]: time="2025-12-13T10:25:37.833942463Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5ff92ae3-4d2c-41e8-b9bc-3cd86f60d04a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:29:41.051525    4980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:41.052096    4980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:41.053400    4980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:41.053909    4980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:29:41.055478    4980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:29:41 up  5:12,  0 user,  load average: 0.28, 0.54, 1.18
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:29:38 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:29:39 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 13 10:29:39 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:29:39 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:29:39 functional-200955 kubelet[4789]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:29:39 functional-200955 kubelet[4789]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:29:39 functional-200955 kubelet[4789]: E1213 10:29:39.268791    4789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:29:39 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:29:39 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:29:39 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 13 10:29:39 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:29:39 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:29:40 functional-200955 kubelet[4860]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:29:40 functional-200955 kubelet[4860]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:29:40 functional-200955 kubelet[4860]: E1213 10:29:40.067948    4860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:29:40 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:29:40 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:29:40 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 13 10:29:40 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:29:40 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:29:40 functional-200955 kubelet[4901]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:29:40 functional-200955 kubelet[4901]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:29:40 functional-200955 kubelet[4901]: E1213 10:29:40.795486    4901 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:29:40 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:29:40 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
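The kubelet section of the log above contains the actual root cause: kubelet v1.35.0-beta.0 fails its own configuration validation on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a loop (the counter reaches 648) and kubeadm's 4m0s health check against http://127.0.0.1:10248/healthz can never succeed. A minimal triage sketch on the affected node, using the commands kubeadm's own output recommends; the stat check for the cgroup mount type is an assumption about the host layout, not something this report runs:

	# Which cgroup hierarchy does the host mount?
	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates the legacy v1 layout.
	stat -fc %T /sys/fs/cgroup/

	# Inspect the kubelet crash loop (the commands kubeadm suggests):
	systemctl status kubelet
	journalctl -xeu kubelet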
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 6 (365.781918ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:29:41.535257  941401 status.go:458] kubeconfig endpoint: get endpoint: "functional-200955" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig

                                                
                                                
** /stderr **
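The status probe fails for a secondary reason: the failed init never registered a "functional-200955" context, so the endpoint lookup in the shared kubeconfig errors out. The stdout above already names the remedy; a sketch using the same binary and profile as the rest of this report:

	# Re-point the kubeconfig at the profile's current endpoint,
	# as the status output itself suggests:
	out/minikube-linux-arm64 update-context -p functional-200955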
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.46s)
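Both remedies proposed in the log operate on kubelet's cgroup handling. A hedged sketch of each; the lower-cased failCgroupV1 spelling for the KubeletConfiguration field is an assumption based on the warning text, not something this run verifies:

	# Option 1, printed in the exit suggestion: force the systemd cgroup driver.
	out/minikube-linux-arm64 start -p functional-200955 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Option 2, from the SystemVerification warning: opt back into cgroup v1
	# in the kubelet config the init log writes (/var/lib/kubelet/config.yaml):
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false

Neither option was exercised here; migrating the runner to a cgroup v2 host would avoid the deprecated path entirely, per the KEP linked in the warning.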

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 10:29:41.551535  907484 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-200955 --alsologtostderr -v=8
E1213 10:30:25.726269  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:30:53.436939  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:33:37.839934  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:35:00.918238  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:35:25.726314  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-200955 --alsologtostderr -v=8: exit status 80 (6m5.446496946s)

                                                
                                                
-- stdout --
	* [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:29:41.597851  941476 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:29:41.597968  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.597980  941476 out.go:374] Setting ErrFile to fd 2...
	I1213 10:29:41.597985  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.598264  941476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:29:41.598640  941476 out.go:368] Setting JSON to false
	I1213 10:29:41.599496  941476 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18731,"bootTime":1765603051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:29:41.599570  941476 start.go:143] virtualization:  
	I1213 10:29:41.603284  941476 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:29:41.606132  941476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:29:41.606240  941476 notify.go:221] Checking for updates...
	I1213 10:29:41.611909  941476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:29:41.614766  941476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:41.617588  941476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:29:41.620495  941476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:29:41.623575  941476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:29:41.626951  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:41.627063  941476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:29:41.660528  941476 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:29:41.660648  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.716071  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.706597811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.716181  941476 docker.go:319] overlay module found
	I1213 10:29:41.719241  941476 out.go:179] * Using the docker driver based on existing profile
	I1213 10:29:41.721997  941476 start.go:309] selected driver: docker
	I1213 10:29:41.722027  941476 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.722127  941476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:29:41.722252  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.778165  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.768783539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.778600  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:41.778650  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
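	The two cni.go lines above record the default-CNI decision: with the docker driver, the crio runtime, and no CNI requested explicitly (CNI: is empty in the config), minikube falls back to kindnet. A minimal Go sketch of that decision rule, offered as an illustration of the logged behavior and not minikube's actual cni.go:

    package main

    import "fmt"

    // recommendCNI mirrors the rule the log shows: the docker (KIC) driver
    // paired with a non-dockerd runtime needs a CNI, and kindnet is the
    // default pick. The function name and fallback value are ours.
    func recommendCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "" // defer to other selection rules
    }

    func main() {
        fmt.Println(recommendCNI("docker", "crio")) // prints "kindnet"
    }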
	I1213 10:29:41.778703  941476 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.781806  941476 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:29:41.784501  941476 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:29:41.787625  941476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:29:41.790577  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:41.790637  941476 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:29:41.790650  941476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:29:41.790656  941476 cache.go:65] Caching tarball of preloaded images
	I1213 10:29:41.790739  941476 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:29:41.790750  941476 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:29:41.790859  941476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:29:41.809947  941476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:29:41.809969  941476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:29:41.809989  941476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:29:41.810023  941476 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:29:41.810091  941476 start.go:364] duration metric: took 45.924µs to acquireMachinesLock for "functional-200955"
	I1213 10:29:41.810115  941476 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:29:41.810124  941476 fix.go:54] fixHost starting: 
	I1213 10:29:41.810397  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:41.827321  941476 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:29:41.827351  941476 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:29:41.830448  941476 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:29:41.830480  941476 machine.go:94] provisionDockerMachine start ...
	I1213 10:29:41.830562  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:41.846863  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:41.847197  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:41.847214  941476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:29:41.996943  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:41.996971  941476 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:29:41.997042  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.018825  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.019169  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.019192  941476 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:29:42.186347  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:42.186459  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.209314  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.209694  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.209712  941476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:29:42.370026  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
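	The hostname script above is idempotent: it touches /etc/hosts only when no line already names the host, and then either rewrites the existing 127.0.1.1 entry in place or appends a fresh one. A small Go sketch that reproduces the same snippet for an arbitrary hostname (the helper name is ours; minikube assembles an equivalent script inline):

    package main

    import "fmt"

    // ensureHostsCmd returns the shell snippet logged above, parameterized by
    // hostname: skip entirely if /etc/hosts already has an entry, otherwise
    // rewrite or append the 127.0.1.1 line.
    func ensureHostsCmd(host string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, host)
    }

    func main() {
        fmt.Println(ensureHostsCmd("functional-200955"))
    }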
	I1213 10:29:42.370125  941476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:29:42.370174  941476 ubuntu.go:190] setting up certificates
	I1213 10:29:42.370200  941476 provision.go:84] configureAuth start
	I1213 10:29:42.370268  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:42.388638  941476 provision.go:143] copyHostCerts
	I1213 10:29:42.388684  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388728  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:29:42.388739  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388819  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:29:42.388924  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388947  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:29:42.388956  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388985  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:29:42.389034  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389056  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:29:42.389064  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389093  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
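	copyHostCerts follows a remove-then-copy pattern (found ... removing ... rm ... cp in the lines above) so that repeated restarts never leave a stale ca.pem, cert.pem, or key.pem behind. A self-contained sketch of that pattern, with a function name of our choosing rather than minikube's:

    package main

    import (
        "io"
        "os"
    )

    // refreshFile deletes any existing destination, then copies the source in
    // full, mirroring the found/rm/cp sequence in the log above.
    func refreshFile(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0600)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        _ = refreshFile("certs/ca.pem", "ca.pem") // paths illustrative only
    }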
	I1213 10:29:42.389148  941476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:29:42.553052  941476 provision.go:177] copyRemoteCerts
	I1213 10:29:42.553125  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:29:42.553174  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.571937  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:42.681380  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:29:42.681440  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:29:42.698297  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:29:42.698381  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:29:42.715245  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:29:42.715360  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:29:42.732152  941476 provision.go:87] duration metric: took 361.926272ms to configureAuth
	I1213 10:29:42.732184  941476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:29:42.732358  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:42.732458  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.749290  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.749620  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.749643  941476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:29:43.093593  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:29:43.093619  941476 machine.go:97] duration metric: took 1.263130563s to provisionDockerMachine
	I1213 10:29:43.093630  941476 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:29:43.093643  941476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:29:43.093703  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:29:43.093752  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.110551  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.213067  941476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:29:43.216076  941476 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:29:43.216096  941476 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:29:43.216102  941476 command_runner.go:130] > VERSION_ID="12"
	I1213 10:29:43.216108  941476 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:29:43.216112  941476 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:29:43.216116  941476 command_runner.go:130] > ID=debian
	I1213 10:29:43.216121  941476 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:29:43.216125  941476 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:29:43.216147  941476 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:29:43.216196  941476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:29:43.216219  941476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:29:43.216231  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:29:43.216286  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:29:43.216365  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:29:43.216375  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /etc/ssl/certs/9074842.pem
	I1213 10:29:43.216452  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:29:43.216461  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> /etc/test/nested/copy/907484/hosts
	I1213 10:29:43.216512  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:29:43.223706  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:43.242619  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:29:43.261652  941476 start.go:296] duration metric: took 168.007176ms for postStartSetup
	I1213 10:29:43.261748  941476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:29:43.261797  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.278068  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.377852  941476 command_runner.go:130] > 19%
	I1213 10:29:43.378272  941476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:29:43.382521  941476 command_runner.go:130] > 159G
	I1213 10:29:43.382892  941476 fix.go:56] duration metric: took 1.572759496s for fixHost
	I1213 10:29:43.382913  941476 start.go:83] releasing machines lock for "functional-200955", held for 1.572809064s
	I1213 10:29:43.382984  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:43.399315  941476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:29:43.399334  941476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:29:43.399371  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.399397  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.423081  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.424445  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.612877  941476 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:29:43.615557  941476 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:29:43.615725  941476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:29:43.621711  941476 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:29:43.621746  941476 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:29:43.622124  941476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:29:43.667216  941476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:29:43.671902  941476 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:29:43.672160  941476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:29:43.672241  941476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:29:43.679969  941476 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:29:43.679994  941476 start.go:496] detecting cgroup driver to use...
	I1213 10:29:43.680025  941476 detect.go:187] detected "cgroupfs" cgroup driver on host os
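	detect.go reports "cgroupfs" for the host before deciding how to configure the runtime. One plausible input to that detection, offered as an assumption on our part rather than minikube's exact logic, is whether the unified cgroup v2 hierarchy is mounted:

    package main

    import (
        "fmt"
        "os"
    )

    // cgroupVersion is a heuristic, not minikube's detect.go: the marker file
    // /sys/fs/cgroup/cgroup.controllers exists only on the unified cgroup v2
    // hierarchy; a v1 mount (as on this Ubuntu 20.04 host) lacks it and is
    // typically driven via cgroupfs.
    func cgroupVersion() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "v2"
        }
        return "v1"
    }

    func main() {
        fmt.Println("cgroup hierarchy:", cgroupVersion())
    }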
	I1213 10:29:43.680082  941476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:29:43.694816  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:29:43.708840  941476 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:29:43.708902  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:29:43.727390  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:29:43.741194  941476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:29:43.853170  941476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:29:43.965117  941476 docker.go:234] disabling docker service ...
	I1213 10:29:43.965193  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:29:43.981069  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:29:43.993651  941476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:29:44.106510  941476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:29:44.230950  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:29:44.243823  941476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:29:44.258241  941476 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:29:44.259524  941476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:29:44.259625  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.267965  941476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:29:44.268046  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.277059  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.285643  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.295522  941476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:29:44.303650  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.312274  941476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.320905  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.329531  941476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:29:44.336129  941476 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:29:44.337017  941476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:29:44.344665  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:44.479199  941476 ssh_runner.go:195] Run: sudo systemctl restart crio
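	For reference, the sed and grep edits above (10:29:44.259 through 10:29:44.320) amount to roughly this fragment of /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than read back from the node:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

	The systemctl restart of crio then picks these settings up before minikube starts waiting on the socket below.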
	I1213 10:29:44.656815  941476 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:29:44.656943  941476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:29:44.660542  941476 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:29:44.660573  941476 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:29:44.660581  941476 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 10:29:44.660588  941476 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:44.660594  941476 command_runner.go:130] > Access: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660602  941476 command_runner.go:130] > Modify: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660608  941476 command_runner.go:130] > Change: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660615  941476 command_runner.go:130] >  Birth: -
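	The "Will wait 60s" steps here are simple poll loops: stat the socket (and then crictl version) until success or deadline. A minimal local sketch of the socket wait; note that minikube runs the stat over SSH inside the node, and the poll interval and error text below are our choices for illustration:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the CRI-O socket appears or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }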
	I1213 10:29:44.660643  941476 start.go:564] Will wait 60s for crictl version
	I1213 10:29:44.660697  941476 ssh_runner.go:195] Run: which crictl
	I1213 10:29:44.664032  941476 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:29:44.664157  941476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:29:44.686934  941476 command_runner.go:130] > Version:  0.1.0
	I1213 10:29:44.686958  941476 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:29:44.686965  941476 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:29:44.686970  941476 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:29:44.687007  941476 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:29:44.687101  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.715374  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.715400  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.715407  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.715412  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.715417  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.715422  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.715435  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.715442  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.715446  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.715453  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.715457  941476 command_runner.go:130] >      static
	I1213 10:29:44.715461  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.715464  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.715476  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.715480  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.715484  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.715492  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.715496  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.715504  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.715508  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.717596  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.744267  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.744305  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.744312  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.744317  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.744322  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.744327  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.744331  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.744337  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.744341  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.744346  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.744350  941476 command_runner.go:130] >      static
	I1213 10:29:44.744376  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.744385  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.744390  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.744393  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.744397  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.744406  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.744411  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.744419  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.744424  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.751529  941476 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:29:44.754410  941476 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:29:44.770603  941476 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:29:44.774419  941476 command_runner.go:130] > 192.168.49.1	host.minikube.internal
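	The long --format template two commands up makes docker network inspect print a small JSON object (name, driver, subnet, gateway, MTU, container IPs) that minikube can decode directly; the 192.168.49.1 gateway it yields is the address the host.minikube.internal grep just confirmed in /etc/hosts. A simplified sketch of the same technique, with a reduced template so the output is plain valid JSON (the struct and field subset are ours):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type netInfo struct {
        Name    string `json:"Name"`
        Gateway string `json:"Gateway"`
    }

    func main() {
        // Ask docker to render inspect output as JSON via a Go template.
        out, err := exec.Command("docker", "network", "inspect", "functional-200955",
            "--format", `{"Name":"{{.Name}}","Gateway":"{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`).Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var ni netInfo
        if err := json.Unmarshal(out, &ni); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("gateway for %s: %s\n", ni.Name, ni.Gateway)
    }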
	I1213 10:29:44.774622  941476 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:29:44.774752  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:44.774840  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.811833  941476 command_runner.go:130] > {
	I1213 10:29:44.811851  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.811855  941476 command_runner.go:130] >     {
	I1213 10:29:44.811864  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.811869  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811875  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.811879  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811883  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811892  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.811900  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.811904  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811908  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.811912  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811920  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811923  941476 command_runner.go:130] >     },
	I1213 10:29:44.811927  941476 command_runner.go:130] >     {
	I1213 10:29:44.811933  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.811938  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811944  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.811947  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811951  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811959  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.811968  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.811980  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811984  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.811988  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811994  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811997  941476 command_runner.go:130] >     },
	I1213 10:29:44.812000  941476 command_runner.go:130] >     {
	I1213 10:29:44.812007  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.812011  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812017  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.812020  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812024  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812032  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.812040  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.812047  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812051  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.812056  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.812059  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812062  941476 command_runner.go:130] >     },
	I1213 10:29:44.812066  941476 command_runner.go:130] >     {
	I1213 10:29:44.812073  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.812076  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812081  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.812085  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812089  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812097  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.812104  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.812109  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812113  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.812116  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812120  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812123  941476 command_runner.go:130] >       },
	I1213 10:29:44.812132  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812136  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812143  941476 command_runner.go:130] >     },
	I1213 10:29:44.812146  941476 command_runner.go:130] >     {
	I1213 10:29:44.812152  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.812156  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812161  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.812164  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812168  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812176  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.812184  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.812187  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812191  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.812195  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812198  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812201  941476 command_runner.go:130] >       },
	I1213 10:29:44.812204  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812208  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812211  941476 command_runner.go:130] >     },
	I1213 10:29:44.812213  941476 command_runner.go:130] >     {
	I1213 10:29:44.812220  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.812224  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812230  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.812233  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812236  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812244  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.812253  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.812256  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812259  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.812263  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812266  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812269  941476 command_runner.go:130] >       },
	I1213 10:29:44.812273  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812277  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812280  941476 command_runner.go:130] >     },
	I1213 10:29:44.812286  941476 command_runner.go:130] >     {
	I1213 10:29:44.812293  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.812296  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812302  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.812304  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812308  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812316  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.812323  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.812326  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812330  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.812334  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812337  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812340  941476 command_runner.go:130] >     },
	I1213 10:29:44.812343  941476 command_runner.go:130] >     {
	I1213 10:29:44.812349  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.812353  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812358  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.812361  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812364  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812372  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.812390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.812393  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812397  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.812400  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812405  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812408  941476 command_runner.go:130] >       },
	I1213 10:29:44.812412  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812416  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812419  941476 command_runner.go:130] >     },
	I1213 10:29:44.812422  941476 command_runner.go:130] >     {
	I1213 10:29:44.812428  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.812432  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812436  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.812442  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812446  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812454  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.812462  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.812464  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812468  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.812471  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812475  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.812478  941476 command_runner.go:130] >       },
	I1213 10:29:44.812482  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812485  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.812488  941476 command_runner.go:130] >     }
	I1213 10:29:44.812491  941476 command_runner.go:130] >   ]
	I1213 10:29:44.812494  941476 command_runner.go:130] > }
	I1213 10:29:44.812656  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.812664  941476 crio.go:433] Images already preloaded, skipping extraction
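	The conclusion at crio.go:514 comes from comparing the JSON above against the image set expected for v1.35.0-beta.0 on crio. A sketch of that comparison (the struct names and the abridged required list are ours; the JSON field names match the crictl output printed above):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println(err)
            return
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Two of the tags minikube expects for this Kubernetes/runtime pair.
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/pause:3.10.1",
        } {
            if !have[want] {
                fmt.Println("missing preloaded image:", want)
            }
        }
    }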
	I1213 10:29:44.812720  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.834840  941476 command_runner.go:130] > {
	I1213 10:29:44.834859  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.834863  941476 command_runner.go:130] >     {
	I1213 10:29:44.834871  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.834878  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834893  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.834897  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834903  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834913  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.834921  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.834924  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834928  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.834932  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.834941  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.834944  941476 command_runner.go:130] >     },
	I1213 10:29:44.834947  941476 command_runner.go:130] >     {
	I1213 10:29:44.834953  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.834957  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834962  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.834965  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834969  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834977  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.834986  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.834989  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834993  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.834997  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835006  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835009  941476 command_runner.go:130] >     },
	I1213 10:29:44.835013  941476 command_runner.go:130] >     {
	I1213 10:29:44.835019  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.835023  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835028  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.835032  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835036  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835044  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.835052  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.835055  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835058  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.835062  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.835066  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835069  941476 command_runner.go:130] >     },
	I1213 10:29:44.835073  941476 command_runner.go:130] >     {
	I1213 10:29:44.835080  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.835083  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835088  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.835093  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835100  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835108  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.835116  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.835119  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835123  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.835127  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835131  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835134  941476 command_runner.go:130] >       },
	I1213 10:29:44.835147  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835151  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835154  941476 command_runner.go:130] >     },
	I1213 10:29:44.835157  941476 command_runner.go:130] >     {
	I1213 10:29:44.835163  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.835167  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835172  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.835175  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835179  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835187  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.835195  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.835197  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835201  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.835205  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835209  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835212  941476 command_runner.go:130] >       },
	I1213 10:29:44.835215  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835219  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835222  941476 command_runner.go:130] >     },
	I1213 10:29:44.835224  941476 command_runner.go:130] >     {
	I1213 10:29:44.835231  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.835234  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835240  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.835243  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835247  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835261  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.835270  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.835273  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835277  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.835281  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835285  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835288  941476 command_runner.go:130] >       },
	I1213 10:29:44.835292  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835295  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835298  941476 command_runner.go:130] >     },
	I1213 10:29:44.835302  941476 command_runner.go:130] >     {
	I1213 10:29:44.835309  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.835312  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835318  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.835320  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835324  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835332  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.835340  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.835343  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835347  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.835351  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835355  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835358  941476 command_runner.go:130] >     },
	I1213 10:29:44.835361  941476 command_runner.go:130] >     {
	I1213 10:29:44.835367  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.835371  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835376  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.835379  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835383  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.835407  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.835411  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835415  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.835422  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835426  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835429  941476 command_runner.go:130] >       },
	I1213 10:29:44.835433  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835436  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835439  941476 command_runner.go:130] >     },
	I1213 10:29:44.835442  941476 command_runner.go:130] >     {
	I1213 10:29:44.835449  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.835452  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835457  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.835460  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835463  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835470  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.835478  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.835481  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835485  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.835489  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835492  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.835495  941476 command_runner.go:130] >       },
	I1213 10:29:44.835499  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835503  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.835506  941476 command_runner.go:130] >     }
	I1213 10:29:44.835508  941476 command_runner.go:130] >   ]
	I1213 10:29:44.835512  941476 command_runner.go:130] > }
	I1213 10:29:44.838144  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.838206  941476 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:29:44.838219  941476 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:29:44.838324  941476 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:29:44.838426  941476 ssh_runner.go:195] Run: crio config
	I1213 10:29:44.886075  941476 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:29:44.886098  941476 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:29:44.886106  941476 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:29:44.886110  941476 command_runner.go:130] > #
	I1213 10:29:44.886117  941476 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:29:44.886124  941476 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:29:44.886130  941476 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:29:44.886139  941476 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:29:44.886142  941476 command_runner.go:130] > # reload'.
	I1213 10:29:44.886162  941476 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:29:44.886169  941476 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:29:44.886175  941476 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:29:44.886181  941476 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:29:44.886184  941476 command_runner.go:130] > [crio]
	I1213 10:29:44.886190  941476 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:29:44.886195  941476 command_runner.go:130] > # container images, in this directory.
	I1213 10:29:44.886932  941476 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:29:44.886948  941476 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:29:44.887520  941476 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:29:44.887536  941476 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:29:44.887990  941476 command_runner.go:130] > # imagestore = ""
	I1213 10:29:44.888002  941476 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:29:44.888019  941476 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:29:44.888390  941476 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:29:44.888402  941476 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1213 10:29:44.888409  941476 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:29:44.888578  941476 command_runner.go:130] > # storage_option = [
	I1213 10:29:44.888743  941476 command_runner.go:130] > # ]
	I1213 10:29:44.888754  941476 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:29:44.888761  941476 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:29:44.888765  941476 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:29:44.888771  941476 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:29:44.888787  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:29:44.888792  941476 command_runner.go:130] > # always happen on a node reboot
	I1213 10:29:44.888797  941476 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:29:44.888807  941476 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:29:44.888813  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:29:44.888818  941476 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:29:44.888822  941476 command_runner.go:130] > # version_file_persist = ""
	I1213 10:29:44.888829  941476 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:29:44.888839  941476 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:29:44.888843  941476 command_runner.go:130] > # internal_wipe = true
	I1213 10:29:44.888851  941476 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:29:44.888856  941476 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:29:44.888860  941476 command_runner.go:130] > # internal_repair = true
	I1213 10:29:44.888869  941476 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:29:44.888875  941476 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:29:44.888881  941476 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:29:44.888886  941476 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
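The commented-out lines in this dump are CRI-O's compiled-in defaults; only uncommented keys are active overrides. A default is usually changed via a drop-in file under /etc/crio/crio.conf.d/ rather than by editing this output. A minimal sketch, assuming the stock drop-in mechanism (the file name is illustrative and nothing below is taken from this run):

	# /etc/crio/crio.conf.d/10-wipe.conf (hypothetical)
	[crio]
	# Keep containers and images across reboots instead of wiping them:
	internal_wipe = false
	clean_shutdown_file = "/var/lib/crio/clean.shutdown"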
	I1213 10:29:44.888892  941476 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:29:44.888895  941476 command_runner.go:130] > [crio.api]
	I1213 10:29:44.888901  941476 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:29:44.888905  941476 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:29:44.888910  941476 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:29:44.888914  941476 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:29:44.888921  941476 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:29:44.888926  941476 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:29:44.888929  941476 command_runner.go:130] > # stream_port = "0"
	I1213 10:29:44.888934  941476 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:29:44.888938  941476 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:29:44.888944  941476 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:29:44.889110  941476 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:29:44.889121  941476 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:29:44.889127  941476 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889131  941476 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:29:44.889137  941476 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:29:44.889143  941476 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889156  941476 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:29:44.889162  941476 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:29:44.889169  941476 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:29:44.889177  941476 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:29:44.889181  941476 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:29:44.889197  941476 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889202  941476 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:29:44.889209  941476 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889214  941476 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:29:44.889220  941476 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:29:44.889225  941476 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:29:44.889229  941476 command_runner.go:130] > [crio.runtime]
	I1213 10:29:44.889235  941476 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:29:44.889240  941476 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:29:44.889244  941476 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:29:44.889253  941476 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:29:44.889257  941476 command_runner.go:130] > # default_ulimits = [
	I1213 10:29:44.889260  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889265  941476 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:29:44.889269  941476 command_runner.go:130] > # no_pivot = false
	I1213 10:29:44.889274  941476 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:29:44.889280  941476 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:29:44.889285  941476 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:29:44.889291  941476 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:29:44.889296  941476 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:29:44.889318  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889322  941476 command_runner.go:130] > # conmon = ""
	I1213 10:29:44.889327  941476 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:29:44.889333  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:29:44.889512  941476 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:29:44.889563  941476 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:29:44.889585  941476 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:29:44.889610  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889647  941476 command_runner.go:130] > # conmon_env = [
	I1213 10:29:44.889671  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889696  941476 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:29:44.889721  941476 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:29:44.889753  941476 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:29:44.889776  941476 command_runner.go:130] > # default_env = [
	I1213 10:29:44.889797  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889822  941476 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:29:44.889858  941476 command_runner.go:130] > # This option is deprecated, and will be inferred from whether SELinux is enabled on the host in the future.
	I1213 10:29:44.889885  941476 command_runner.go:130] > # selinux = false
	I1213 10:29:44.889906  941476 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:29:44.889932  941476 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:29:44.889962  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.889985  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.890009  941476 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:29:44.890029  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890061  941476 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:29:44.890087  941476 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:29:44.890109  941476 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:29:44.890133  941476 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:29:44.890166  941476 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:29:44.890191  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890212  941476 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:29:44.890236  941476 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:29:44.890284  941476 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:29:44.890307  941476 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:29:44.890329  941476 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:29:44.890350  941476 command_runner.go:130] > # blockio parameters.
	I1213 10:29:44.890409  941476 command_runner.go:130] > # blockio_reload = false
	I1213 10:29:44.890437  941476 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:29:44.890458  941476 command_runner.go:130] > # irqbalance daemon.
	I1213 10:29:44.890483  941476 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:29:44.890515  941476 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1213 10:29:44.890551  941476 command_runner.go:130] > # restore as the irqbalance config at startup. Set to an empty string to disable this flow entirely.
	I1213 10:29:44.890575  941476 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:29:44.890599  941476 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:29:44.890631  941476 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:29:44.890655  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890676  941476 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:29:44.890716  941476 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:29:44.890743  941476 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:29:44.890767  941476 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:29:44.890788  941476 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:29:44.890824  941476 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:29:44.890863  941476 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:29:44.890886  941476 command_runner.go:130] > # will be added.
	I1213 10:29:44.890904  941476 command_runner.go:130] > # default_capabilities = [
	I1213 10:29:44.890932  941476 command_runner.go:130] > # 	"CHOWN",
	I1213 10:29:44.890957  941476 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:29:44.891256  941476 command_runner.go:130] > # 	"FSETID",
	I1213 10:29:44.891291  941476 command_runner.go:130] > # 	"FOWNER",
	I1213 10:29:44.891318  941476 command_runner.go:130] > # 	"SETGID",
	I1213 10:29:44.891335  941476 command_runner.go:130] > # 	"SETUID",
	I1213 10:29:44.891390  941476 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:29:44.891416  941476 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:29:44.891438  941476 command_runner.go:130] > # 	"KILL",
	I1213 10:29:44.891461  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891498  941476 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:29:44.891527  941476 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:29:44.891550  941476 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:29:44.891572  941476 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:29:44.891606  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891629  941476 command_runner.go:130] > default_sysctls = [
	I1213 10:29:44.891651  941476 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:29:44.891671  941476 command_runner.go:130] > ]
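Note that conmon_cgroup, cgroup_manager, and default_sysctls appear uncommented above, i.e. they are the settings actually overridden on this node. Collected into a standalone drop-in they would read as follows (a sketch; the exact file that produced them is not shown in this log):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	# Allow containers to bind low ports without CAP_NET_BIND_SERVICE:
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
	]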
	I1213 10:29:44.891705  941476 command_runner.go:130] > # List of devices on the host that a
	I1213 10:29:44.891730  941476 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:29:44.891749  941476 command_runner.go:130] > # allowed_devices = [
	I1213 10:29:44.891779  941476 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:29:44.891809  941476 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:29:44.891834  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891856  941476 command_runner.go:130] > # List of additional devices, specified as
	I1213 10:29:44.891880  941476 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:29:44.891914  941476 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:29:44.891940  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891962  941476 command_runner.go:130] > # additional_devices = [
	I1213 10:29:44.891983  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892017  941476 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:29:44.892041  941476 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:29:44.892063  941476 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:29:44.892082  941476 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:29:44.892103  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892139  941476 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:29:44.892161  941476 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:29:44.892183  941476 command_runner.go:130] > # Defaults to false.
	I1213 10:29:44.892215  941476 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:29:44.892243  941476 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:29:44.892267  941476 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:29:44.892287  941476 command_runner.go:130] > # hooks_dir = [
	I1213 10:29:44.892324  941476 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:29:44.892349  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892371  941476 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:29:44.892394  941476 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:29:44.892427  941476 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:29:44.892450  941476 command_runner.go:130] > #
	I1213 10:29:44.892472  941476 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:29:44.892496  941476 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:29:44.892529  941476 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:29:44.892555  941476 command_runner.go:130] > #
	I1213 10:29:44.892582  941476 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:29:44.892608  941476 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:29:44.892654  941476 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:29:44.892680  941476 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:29:44.892700  941476 command_runner.go:130] > #
	I1213 10:29:44.892722  941476 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:29:44.892742  941476 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:29:44.892779  941476 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:29:44.892797  941476 command_runner.go:130] > # pids_limit = -1
	I1213 10:29:44.892825  941476 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 10:29:44.892860  941476 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:29:44.892886  941476 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:29:44.892912  941476 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:29:44.892937  941476 command_runner.go:130] > # log_size_max = -1
	I1213 10:29:44.892967  941476 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:29:44.892992  941476 command_runner.go:130] > # log_to_journald = false
	I1213 10:29:44.893016  941476 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:29:44.893040  941476 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:29:44.893073  941476 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:29:44.893097  941476 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:29:44.893118  941476 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:29:44.893142  941476 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:29:44.893174  941476 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:29:44.893198  941476 command_runner.go:130] > # read_only = false
	I1213 10:29:44.893223  941476 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:29:44.893245  941476 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:29:44.893278  941476 command_runner.go:130] > # live configuration reload.
	I1213 10:29:44.893302  941476 command_runner.go:130] > # log_level = "info"
	I1213 10:29:44.893331  941476 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:29:44.893353  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.893380  941476 command_runner.go:130] > # log_filter = ""
	I1213 10:29:44.893406  941476 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893430  941476 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:29:44.893452  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893486  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893520  941476 command_runner.go:130] > # uid_mappings = ""
	I1213 10:29:44.893564  941476 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893593  941476 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:29:44.893617  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893643  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893997  941476 command_runner.go:130] > # gid_mappings = ""
	I1213 10:29:44.894010  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:29:44.894017  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894024  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894032  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894037  941476 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:29:44.894043  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:29:44.894050  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894056  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894064  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894068  941476 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:29:44.894074  941476 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:29:44.894081  941476 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:29:44.894086  941476 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:29:44.894090  941476 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:29:44.894096  941476 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:29:44.894102  941476 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:29:44.894107  941476 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:29:44.894111  941476 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:29:44.894115  941476 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:29:44.894121  941476 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:29:44.894127  941476 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:29:44.894135  941476 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:29:44.894141  941476 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:29:44.894149  941476 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:29:44.894155  941476 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:29:44.894160  941476 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:29:44.894165  941476 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:29:44.894173  941476 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:29:44.894179  941476 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:29:44.894184  941476 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:29:44.894188  941476 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:29:44.894195  941476 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:29:44.894199  941476 command_runner.go:130] > # pinns_path = ""
	I1213 10:29:44.894204  941476 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:29:44.894210  941476 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:29:44.894216  941476 command_runner.go:130] > # enable_criu_support = true
	I1213 10:29:44.894223  941476 command_runner.go:130] > # Enable/disable the generation of the container and
	I1213 10:29:44.894229  941476 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:29:44.894234  941476 command_runner.go:130] > # enable_pod_events = false
	I1213 10:29:44.894240  941476 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:29:44.894245  941476 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:29:44.894249  941476 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:29:44.894254  941476 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:29:44.894261  941476 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1213 10:29:44.894271  941476 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:29:44.894276  941476 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:29:44.894284  941476 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:29:44.894289  941476 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:29:44.894293  941476 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:29:44.894297  941476 command_runner.go:130] > # ]
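As a concrete instance of the /etc/hostname case the comment describes, the rejection list would be populated like this (a sketch; the list is empty in this run's config):

	[crio.runtime]
	# Fail container creation if /etc/hostname is requested as a bind-mount
	# source but absent on the host, instead of creating it as a directory:
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]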
	I1213 10:29:44.894303  941476 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:29:44.894309  941476 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:29:44.894316  941476 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:29:44.894321  941476 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:29:44.894324  941476 command_runner.go:130] > #
	I1213 10:29:44.894329  941476 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:29:44.894333  941476 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:29:44.894337  941476 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:29:44.894342  941476 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:29:44.894348  941476 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:29:44.894367  941476 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:29:44.894372  941476 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:29:44.894377  941476 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:29:44.894381  941476 command_runner.go:130] > # monitor_env = []
	I1213 10:29:44.894386  941476 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:29:44.894390  941476 command_runner.go:130] > # allowed_annotations = []
	I1213 10:29:44.894395  941476 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:29:44.894399  941476 command_runner.go:130] > # no_sync_log = false
	I1213 10:29:44.894403  941476 command_runner.go:130] > # default_annotations = {}
	I1213 10:29:44.894407  941476 command_runner.go:130] > # stream_websockets = false
	I1213 10:29:44.894411  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.894442  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.894448  941476 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:29:44.894454  941476 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:29:44.894461  941476 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:29:44.894468  941476 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:29:44.894471  941476 command_runner.go:130] > #   in $PATH.
	I1213 10:29:44.894478  941476 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:29:44.894482  941476 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:29:44.894488  941476 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:29:44.894492  941476 command_runner.go:130] > #   state.
	I1213 10:29:44.894498  941476 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:29:44.894504  941476 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1213 10:29:44.894510  941476 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:29:44.894516  941476 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:29:44.894521  941476 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:29:44.894527  941476 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:29:44.894533  941476 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:29:44.894539  941476 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:29:44.894545  941476 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:29:44.894550  941476 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:29:44.894557  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:29:44.894564  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:29:44.894574  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:29:44.894580  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:29:44.894588  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:29:44.894596  941476 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:29:44.894602  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:29:44.894608  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:29:44.894614  941476 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:29:44.894621  941476 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:29:44.894628  941476 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:29:44.894634  941476 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:29:44.894640  941476 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:29:44.894646  941476 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:29:44.894652  941476 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:29:44.894661  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:29:44.894667  941476 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:29:44.894672  941476 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:29:44.894679  941476 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:29:44.894684  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:29:44.894691  941476 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:29:44.894695  941476 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:29:44.894702  941476 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:29:44.894707  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:29:44.894714  941476 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 10:29:44.894718  941476 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:29:44.894726  941476 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:29:44.894734  941476 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:29:44.894742  941476 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:29:44.894748  941476 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:29:44.894753  941476 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:29:44.894760  941476 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:29:44.894768  941476 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:29:44.894774  941476 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:29:44.894782  941476 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:29:44.894794  941476 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:29:44.894798  941476 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:29:44.894805  941476 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:29:44.894813  941476 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:29:44.894821  941476 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:29:44.894825  941476 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:29:44.894838  941476 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:29:44.894848  941476 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:29:44.894851  941476 command_runner.go:130] > #
	I1213 10:29:44.894855  941476 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:29:44.894859  941476 command_runner.go:130] > #
	I1213 10:29:44.894866  941476 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:29:44.894872  941476 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:29:44.894878  941476 command_runner.go:130] > #
	I1213 10:29:44.894887  941476 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:29:44.894893  941476 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:29:44.894896  941476 command_runner.go:130] > #
	I1213 10:29:44.894903  941476 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:29:44.894906  941476 command_runner.go:130] > # feature.
	I1213 10:29:44.894909  941476 command_runner.go:130] > #
	I1213 10:29:44.894914  941476 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:29:44.894921  941476 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:29:44.894927  941476 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:29:44.894933  941476 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:29:44.894939  941476 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:29:44.894942  941476 command_runner.go:130] > #
	I1213 10:29:44.894948  941476 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:29:44.894954  941476 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:29:44.894957  941476 command_runner.go:130] > #
	I1213 10:29:44.894963  941476 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:29:44.894968  941476 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:29:44.894971  941476 command_runner.go:130] > #
	I1213 10:29:44.894977  941476 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:29:44.894987  941476 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:29:44.894991  941476 command_runner.go:130] > # limitation.
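Per the description above, enabling the notifier amounts to whitelisting the annotation on a runtime handler, e.g. (illustrative; in this run only crun's allowed_annotations is set, and only to "io.containers.trace-syscall"):

	[crio.runtime.runtimes.runc]
	# Permit pods to request the seccomp notifier via annotation:
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]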
	I1213 10:29:44.894995  941476 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:29:44.895000  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:29:44.895004  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895008  941476 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:29:44.895013  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895016  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895020  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895025  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895028  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895032  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895036  941476 command_runner.go:130] > allowed_annotations = [
	I1213 10:29:44.895040  941476 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:29:44.895043  941476 command_runner.go:130] > ]
	I1213 10:29:44.895047  941476 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:29:44.895051  941476 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:29:44.895056  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:29:44.895059  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895064  941476 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:29:44.895069  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895072  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895076  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895081  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895084  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895089  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895093  941476 command_runner.go:130] > privileged_without_host_devices = false
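The crun and runc entries above are concrete instances of the handler schema documented earlier. For comparison, a hypothetical VM-type handler following the same schema might look like this (a sketch only; the "kata" name and both paths are assumptions, not part of this run):

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"
	runtime_type = "vm"
	# runtime_config_path is only honored for the "vm" runtime_type:
	runtime_config_path = "/etc/kata-containers/configuration.toml"
	privileged_without_host_devices = true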
	I1213 10:29:44.895100  941476 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:29:44.895105  941476 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:29:44.895111  941476 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:29:44.895119  941476 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 10:29:44.895129  941476 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:29:44.895139  941476 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:29:44.895151  941476 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:29:44.895156  941476 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:29:44.895166  941476 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:29:44.895174  941476 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:29:44.895181  941476 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:29:44.895188  941476 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:29:44.895191  941476 command_runner.go:130] > # Example:
	I1213 10:29:44.895196  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:29:44.895201  941476 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:29:44.895207  941476 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:29:44.895212  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:29:44.895216  941476 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:29:44.895219  941476 command_runner.go:130] > # cpushares = "5"
	I1213 10:29:44.895223  941476 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:29:44.895227  941476 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:29:44.895230  941476 command_runner.go:130] > # cpulimit = "35"
	I1213 10:29:44.895234  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.895238  941476 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:29:44.895245  941476 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:29:44.895250  941476 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:29:44.895259  941476 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:29:44.895267  941476 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:29:44.895274  941476 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
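Assembled from the commented example above, an active workload table would read (illustrative; no workloads are configured in this run):

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"
	cpushares = "5"

A pod opts in by carrying the "io.crio/workload" annotation; per the $annotation_prefix.$resource/$ctrName form above, a single container could then be tuned with, e.g., an "io.crio.workload-type.cpushares/<container>" annotation.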
	I1213 10:29:44.895279  941476 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:29:44.895286  941476 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:29:44.895290  941476 command_runner.go:130] > # Default value is set to true
	I1213 10:29:44.895294  941476 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:29:44.895300  941476 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:29:44.895305  941476 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:29:44.895309  941476 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:29:44.895313  941476 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:29:44.895318  941476 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 10:29:44.895326  941476 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:29:44.895334  941476 command_runner.go:130] > # timezone = ""
	I1213 10:29:44.895341  941476 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:29:44.895343  941476 command_runner.go:130] > #
	I1213 10:29:44.895349  941476 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:29:44.895355  941476 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:29:44.895358  941476 command_runner.go:130] > [crio.image]
	I1213 10:29:44.895364  941476 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:29:44.895368  941476 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:29:44.895373  941476 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:29:44.895380  941476 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895383  941476 command_runner.go:130] > # global_auth_file = ""
	I1213 10:29:44.895388  941476 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:29:44.895393  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895398  941476 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.895404  941476 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:29:44.895412  941476 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895417  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895420  941476 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:29:44.895426  941476 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:29:44.895432  941476 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 10:29:44.895438  941476 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 10:29:44.895444  941476 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:29:44.895448  941476 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:29:44.895454  941476 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:29:44.895460  941476 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:29:44.895467  941476 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:29:44.895473  941476 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:29:44.895479  941476 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:29:44.895485  941476 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:29:44.895488  941476 command_runner.go:130] > # pinned_images = [
	I1213 10:29:44.895491  941476 command_runner.go:130] > # ]
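	As a sketch of the three pattern styles just listed (the image names are placeholders), a drop-in file of the kind CRI-O loads from /etc/crio/crio.conf.d could pin images like this:

sudo tee /etc/crio/crio.conf.d/20-pinned.conf >/dev/null <<'EOF'
[crio.image]
pinned_images = [
    "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
    "registry.k8s.io/kube-*",        # glob: wildcard only at the end
    "*coredns*",                     # keyword: wildcards on both ends
]
EOF
sudo systemctl restart crio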
	I1213 10:29:44.895497  941476 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:29:44.895503  941476 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:29:44.895512  941476 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:29:44.895519  941476 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:29:44.895524  941476 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:29:44.895529  941476 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:29:44.895534  941476 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:29:44.895540  941476 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:29:44.895547  941476 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:29:44.895554  941476 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1213 10:29:44.895559  941476 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:29:44.895564  941476 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
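	For reference, the policy file named above uses the containers-policy.json(5) format; inspecting it, and the minimal permissive shape of that format (the JSON shown is an assumed example, not captured from this host):

sudo cat /etc/crio/policy.json
# assumed minimal shape: {"default": [{"type": "insecureAcceptAnything"}]}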
	I1213 10:29:44.895570  941476 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:29:44.895576  941476 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:29:44.895580  941476 command_runner.go:130] > # changing them here.
	I1213 10:29:44.895586  941476 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:29:44.895590  941476 command_runner.go:130] > # insecure_registries = [
	I1213 10:29:44.895592  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895598  941476 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:29:44.895603  941476 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1213 10:29:44.895609  941476 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:29:44.895614  941476 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:29:44.895618  941476 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:29:44.895623  941476 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:29:44.895630  941476 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:29:44.895634  941476 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:29:44.895641  941476 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:29:44.895651  941476 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1213 10:29:44.895657  941476 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:29:44.895662  941476 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:29:44.895666  941476 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:29:44.895672  941476 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:29:44.895679  941476 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1213 10:29:44.895684  941476 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:29:44.895688  941476 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:29:44.895697  941476 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:29:44.895704  941476 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:29:44.895708  941476 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 10:29:44.895715  941476 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:29:44.895718  941476 command_runner.go:130] > # CNI plugins.
	I1213 10:29:44.895721  941476 command_runner.go:130] > [crio.network]
	I1213 10:29:44.895727  941476 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:29:44.895732  941476 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1213 10:29:44.895735  941476 command_runner.go:130] > # cni_default_network = ""
	I1213 10:29:44.895741  941476 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:29:44.895745  941476 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:29:44.895751  941476 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:29:44.895754  941476 command_runner.go:130] > # plugin_dirs = [
	I1213 10:29:44.895758  941476 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:29:44.895760  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895764  941476 command_runner.go:130] > # List of included pod metrics.
	I1213 10:29:44.895768  941476 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:29:44.895771  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895778  941476 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1213 10:29:44.895781  941476 command_runner.go:130] > [crio.metrics]
	I1213 10:29:44.895786  941476 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:29:44.895790  941476 command_runner.go:130] > # enable_metrics = false
	I1213 10:29:44.895794  941476 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:29:44.895799  941476 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:29:44.895805  941476 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:29:44.895813  941476 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:29:44.895818  941476 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:29:44.895822  941476 command_runner.go:130] > # metrics_collectors = [
	I1213 10:29:44.895826  941476 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:29:44.895831  941476 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:29:44.895834  941476 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:29:44.895838  941476 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:29:44.895842  941476 command_runner.go:130] > # 	"operations_total",
	I1213 10:29:44.895849  941476 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:29:44.895854  941476 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:29:44.895859  941476 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:29:44.895863  941476 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:29:44.895867  941476 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:29:44.895871  941476 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:29:44.895875  941476 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:29:44.895879  941476 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:29:44.895883  941476 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:29:44.895888  941476 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:29:44.895892  941476 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:29:44.895896  941476 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:29:44.895899  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895905  941476 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:29:44.895908  941476 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:29:44.895913  941476 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:29:44.895917  941476 command_runner.go:130] > # metrics_port = 9090
	I1213 10:29:44.895922  941476 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:29:44.895925  941476 command_runner.go:130] > # metrics_socket = ""
	I1213 10:29:44.895930  941476 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:29:44.895937  941476 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:29:44.895943  941476 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:29:44.895947  941476 command_runner.go:130] > # certificate on any modification event.
	I1213 10:29:44.895951  941476 command_runner.go:130] > # metrics_cert = ""
	I1213 10:29:44.895955  941476 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:29:44.895960  941476 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:29:44.895963  941476 command_runner.go:130] > # metrics_key = ""
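	Since enable_metrics defaults to false, exposing and scraping the Prometheus endpoint would look roughly like this (a sketch; the drop-in filename is arbitrary):

sudo tee /etc/crio/crio.conf.d/30-metrics.conf >/dev/null <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_port = 9090
EOF
sudo systemctl restart crio
# scrape the endpoint on the default metrics_host/metrics_port
curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio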
	I1213 10:29:44.895969  941476 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:29:44.895972  941476 command_runner.go:130] > [crio.tracing]
	I1213 10:29:44.895978  941476 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:29:44.895981  941476 command_runner.go:130] > # enable_tracing = false
	I1213 10:29:44.895987  941476 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 10:29:44.895991  941476 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:29:44.896000  941476 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:29:44.896007  941476 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:29:44.896011  941476 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:29:44.896014  941476 command_runner.go:130] > [crio.nri]
	I1213 10:29:44.896018  941476 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:29:44.896022  941476 command_runner.go:130] > # enable_nri = true
	I1213 10:29:44.896025  941476 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:29:44.896030  941476 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:29:44.896034  941476 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:29:44.896038  941476 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:29:44.896043  941476 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:29:44.896051  941476 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:29:44.896057  941476 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:29:44.896113  941476 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:29:44.896119  941476 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:29:44.896123  941476 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:29:44.896128  941476 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:29:44.896133  941476 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:29:44.896137  941476 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:29:44.896144  941476 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:29:44.896150  941476 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:29:44.896155  941476 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:29:44.896158  941476 command_runner.go:130] > # - OCI hook injection
	I1213 10:29:44.896163  941476 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:29:44.896167  941476 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:29:44.896172  941476 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:29:44.896176  941476 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:29:44.896186  941476 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:29:44.896193  941476 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:29:44.896198  941476 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:29:44.896201  941476 command_runner.go:130] > #
	I1213 10:29:44.896205  941476 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:29:44.896209  941476 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:29:44.896218  941476 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:29:44.896223  941476 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:29:44.896229  941476 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:29:44.896234  941476 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:29:44.896239  941476 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:29:44.896243  941476 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:29:44.896245  941476 command_runner.go:130] > # ]
	I1213 10:29:44.896251  941476 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
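	Wiring up the default validator described above is again a matter of a drop-in (a sketch; the filename and the particular rejections chosen are illustrative):

sudo tee /etc/crio/crio.conf.d/40-nri.conf >/dev/null <<'EOF'
[crio.nri]
enable_nri = true

[crio.nri.default_validator]
nri_enable_default_validator = true
nri_validator_reject_oci_hook_adjustment = true
EOF
sudo systemctl restart crio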
	I1213 10:29:44.896257  941476 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:29:44.896261  941476 command_runner.go:130] > [crio.stats]
	I1213 10:29:44.896267  941476 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:29:44.896272  941476 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:29:44.896276  941476 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:29:44.896281  941476 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:29:44.896287  941476 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:29:44.896291  941476 command_runner.go:130] > # collection_period = 0
	I1213 10:29:44.896753  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865564739Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:29:44.896774  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865608538Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:29:44.896784  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865641285Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:29:44.896793  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.86566636Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:29:44.896803  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865746328Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.896812  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.866102466Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:29:44.896826  941476 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 10:29:44.896949  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:44.896967  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:44.896990  941476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:29:44.897016  941476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:29:44.897147  941476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
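	The rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; newer kubeadm releases can sanity-check such a file before it is used (a sketch, assuming kubeadm v1.26+ on the node):

sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new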
	I1213 10:29:44.897221  941476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:29:44.904800  941476 command_runner.go:130] > kubeadm
	I1213 10:29:44.904821  941476 command_runner.go:130] > kubectl
	I1213 10:29:44.904825  941476 command_runner.go:130] > kubelet
	I1213 10:29:44.905083  941476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:29:44.905149  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:29:44.912855  941476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:29:44.926542  941476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:29:44.940018  941476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:29:44.953058  941476 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:29:44.956927  941476 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:29:44.957067  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.090811  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:45.111343  941476 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:29:45.111425  941476 certs.go:195] generating shared ca certs ...
	I1213 10:29:45.111459  941476 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.111653  941476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:29:45.111736  941476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:29:45.111762  941476 certs.go:257] generating profile certs ...
	I1213 10:29:45.111936  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:29:45.112043  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:29:45.112141  941476 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:29:45.112183  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:29:45.112222  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:29:45.112262  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:29:45.112293  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:29:45.112328  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:29:45.112371  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:29:45.112404  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:29:45.112444  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:29:45.112521  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:29:45.112600  941476 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:29:45.112629  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:29:45.112687  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:29:45.112733  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:29:45.112831  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:29:45.113060  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:45.113147  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem -> /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.113186  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.113227  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.113935  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:29:45.163864  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:29:45.189286  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:29:45.237278  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:29:45.263467  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:29:45.289513  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:29:45.309018  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:29:45.329141  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:29:45.347665  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:29:45.365433  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:29:45.383209  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:29:45.402144  941476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:29:45.415520  941476 ssh_runner.go:195] Run: openssl version
	I1213 10:29:45.421431  941476 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:29:45.421939  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.429504  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:29:45.436991  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440561  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440796  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440864  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.483791  941476 command_runner.go:130] > 51391683
	I1213 10:29:45.484209  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:29:45.491520  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.498932  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:29:45.509018  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513215  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513301  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513386  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.554662  941476 command_runner.go:130] > 3ec20f2e
	I1213 10:29:45.555104  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:29:45.562598  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.570035  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:29:45.578308  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582322  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582399  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582459  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.623357  941476 command_runner.go:130] > b5213941
	I1213 10:29:45.623846  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
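	The openssl -hash / ln -fs / test -L triples above implement OpenSSL's hashed CA directory layout: each CA is found via a symlink named <subject-hash>.0. The same check by hand:

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
ls -l "/etc/ssl/certs/${hash}.0"   # e.g. b5213941.0 -> the certificate above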
	I1213 10:29:45.631423  941476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635203  941476 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635226  941476 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:29:45.635232  941476 command_runner.go:130] > Device: 259,1	Inode: 1052598     Links: 1
	I1213 10:29:45.635239  941476 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:45.635245  941476 command_runner.go:130] > Access: 2025-12-13 10:25:37.832562674 +0000
	I1213 10:29:45.635250  941476 command_runner.go:130] > Modify: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635255  941476 command_runner.go:130] > Change: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635260  941476 command_runner.go:130] >  Birth: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635337  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:29:45.676331  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.676780  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:29:45.719984  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.720440  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:29:45.763044  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.763152  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:29:45.804752  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.805187  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:29:45.846806  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.847203  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:29:45.898203  941476 command_runner.go:130] > Certificate will not expire
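	The expiry probes above use openssl's -checkend flag, which exits 0 and prints "Certificate will not expire" when the certificate stays valid for at least the given number of seconds (86400 s = 24 h):

sudo openssl x509 -noout -checkend 86400 \
    -in /var/lib/minikube/certs/front-proxy-client.crt
# prints "Certificate will not expire" and exits 0 while the cert is valid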
	I1213 10:29:45.898680  941476 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:45.898809  941476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:29:45.898933  941476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:29:45.924889  941476 cri.go:89] found id: ""
	I1213 10:29:45.924989  941476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:29:45.932161  941476 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:29:45.932226  941476 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:29:45.932248  941476 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:29:45.933123  941476 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:29:45.933177  941476 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:29:45.933244  941476 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:29:45.940638  941476 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:29:45.941072  941476 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-200955" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.941185  941476 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-904040/kubeconfig needs updating (will repair): [kubeconfig missing "functional-200955" cluster setting kubeconfig missing "functional-200955" context setting]
	I1213 10:29:45.941452  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.941955  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.942103  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.942644  941476 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:29:45.942668  941476 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:29:45.942678  941476 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:29:45.942683  941476 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:29:45.942687  941476 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:29:45.942727  941476 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:29:45.943068  941476 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:29:45.951089  941476 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:29:45.951121  941476 kubeadm.go:602] duration metric: took 17.93243ms to restartPrimaryControlPlane
	I1213 10:29:45.951143  941476 kubeadm.go:403] duration metric: took 52.461003ms to StartCluster
	I1213 10:29:45.951159  941476 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951223  941476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.951796  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951989  941476 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:29:45.952368  941476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:29:45.952448  941476 addons.go:70] Setting storage-provisioner=true in profile "functional-200955"
	I1213 10:29:45.952463  941476 addons.go:239] Setting addon storage-provisioner=true in "functional-200955"
	I1213 10:29:45.952488  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.952566  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:45.952610  941476 addons.go:70] Setting default-storageclass=true in profile "functional-200955"
	I1213 10:29:45.952623  941476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-200955"
	I1213 10:29:45.952911  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.952951  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.958523  941476 out.go:179] * Verifying Kubernetes components...
	I1213 10:29:45.963377  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.989193  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.989357  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.989643  941476 addons.go:239] Setting addon default-storageclass=true in "functional-200955"
	I1213 10:29:45.989674  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.990084  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.996374  941476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:29:45.999301  941476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:45.999325  941476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:29:45.999389  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.025120  941476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.025146  941476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:29:46.025210  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.047237  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.065614  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.182514  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:46.188367  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:46.228034  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.975760  941476 node_ready.go:35] waiting up to 6m0s for node "functional-200955" to be "Ready" ...
	I1213 10:29:46.975884  941476 type.go:168] "Request Body" body=""
	I1213 10:29:46.975940  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:46.976159  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976214  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976242  941476 retry.go:31] will retry after 310.714541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976276  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976296  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976306  941476 retry.go:31] will retry after 212.322267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.188794  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.245508  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.249207  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.249253  941476 retry.go:31] will retry after 232.449188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.287510  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.352377  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.355988  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.356022  941476 retry.go:31] will retry after 216.845813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.476461  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.476540  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.476866  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.482125  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.540633  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.540674  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.540713  941476 retry.go:31] will retry after 621.150122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.573847  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.632148  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.632198  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.632239  941476 retry.go:31] will retry after 652.105841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.976625  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.976714  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.977047  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.162374  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.224014  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.224050  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.224096  941476 retry.go:31] will retry after 486.360631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.285241  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:48.341512  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.345196  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.345232  941476 retry.go:31] will retry after 851.054667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.476501  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.476654  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.477264  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.710766  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.774597  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.774656  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.774677  941476 retry.go:31] will retry after 1.42902923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
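
The validation error itself comes from `kubectl apply` trying to download the cluster's OpenAPI schema from `https://localhost:8441/openapi/v2` before validating the manifest; with the apiserver down, the TCP dial is refused and kubectl suggests `--validate=false` as the escape hatch. A rough Go probe of that same endpoint, using the URL and 32s timeout from the log (the insecure TLS config is a simplification for the sketch; kubectl really validates against the kubeconfig's CA):

```go
// Sketch: check apiserver reachability the way the failing validation does,
// by fetching the OpenAPI document. Illustrative only, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // same timeout kubectl uses in the log
		Transport: &http.Transport{
			// the bootstrapping apiserver serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		// matches the logged "dial tcp [::1]:8441: connect: connection refused"
		fmt.Println("openapi fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver OpenAPI endpoint reachable:", resp.Status)
}
```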
	I1213 10:29:48.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:48.976568  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:49.197102  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:49.269601  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:49.269709  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.269757  941476 retry.go:31] will retry after 1.296706305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.476109  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.476573  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:49.976081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.976179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
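
The paired `round_trippers.go` entries are client-go's verbose HTTP tracing: one line per request (verb, URL, headers) and one per response, where `status=""` with `milliseconds=0` means the TCP dial failed before any HTTP exchange happened. A toy `http.RoundTripper` wrapper that produces similar output — illustrative only; client-go's tracing is switched on by log verbosity, not by code like this:

```go
// Sketch of request/response logging in the style of round_trippers.go:
// wrap an http.RoundTripper to print verb, URL, status, and latency.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
	resp, err := t.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// dial failures never produce an HTTP status, hence the
		// empty status="" seen in the log above
		fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
		return nil, err
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{http.DefaultTransport}}
	_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-200955")
}
```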
	I1213 10:29:50.204048  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:50.263787  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.263835  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.263857  941476 retry.go:31] will retry after 2.257067811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.476081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.476171  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.566907  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:50.629271  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.629314  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.629333  941476 retry.go:31] will retry after 1.765407868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.976841  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.976923  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.977217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:50.977269  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:51.475933  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.476012  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.476290  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:51.976028  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.395020  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:52.456823  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.456875  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.456899  941476 retry.go:31] will retry after 1.561909689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.476063  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.476147  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.476449  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.521915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:52.578203  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.581870  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.581904  941476 retry.go:31] will retry after 3.834800834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.976296  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.976371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.976640  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:53.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:53.476481  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
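
The `node_ready.go:55` warnings come from a roughly half-second poll of the node object: each cycle GETs `/api/v1/nodes/functional-200955`, checks the node's `Ready` condition, and retries on connection errors. A minimal client-go sketch of such a poll, reusing the kubeconfig path and node name from the log (the loop structure, interval, and lack of a timeout are assumptions of the sketch, not minikube's implementation):

```go
// Sketch of the node-readiness poll seen above: fetch the node and
// inspect its Ready condition, retrying on connection errors.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(),
			"functional-200955", metav1.GetOptions{})
		if err != nil {
			// corresponds to the node_ready.go:55 "will retry" warnings above
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```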
	I1213 10:29:53.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.976238  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.976665  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.019913  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:54.081795  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:54.081851  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.081875  941476 retry.go:31] will retry after 4.858817388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.476105  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.476182  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.476432  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.976093  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.976415  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:55.476129  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.476226  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:55.476588  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:55.976456  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.976520  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:56.417572  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:56.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.476423  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:56.480436  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.480483  941476 retry.go:31] will retry after 4.792687173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.976051  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.976145  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.476051  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.976104  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.976249  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.976601  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:57.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:58.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.476277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:58.940954  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:58.976372  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.976458  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.976716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.010699  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:59.010740  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.010759  941476 retry.go:31] will retry after 7.734765537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.476520  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.476594  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.476930  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.976794  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.977198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:59.977252  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:00.476972  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.477066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.477383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:00.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.976196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.976547  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.274155  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:01.347774  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:01.347813  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.347834  941476 retry.go:31] will retry after 9.325183697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.478515  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.478628  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.479014  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.976839  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.976947  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.977331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:01.977404  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:02.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.476537  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.976649  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.476192  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.476276  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.976228  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.976352  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.976726  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:04.476318  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.476410  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.476740  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:04.476799  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:04.976561  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.976631  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.976878  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.476699  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.476787  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.477120  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.977016  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.977144  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.977510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.476060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.476330  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.746112  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:06.805144  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:06.808651  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.808685  941476 retry.go:31] will retry after 7.088599712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.976026  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.976116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:06.976507  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:07.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.476279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.476634  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:07.976084  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.976170  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.976444  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.476153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.476482  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.976213  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.976308  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.976642  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:08.976701  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:09.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.476212  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:09.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.976492  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.476265  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.476368  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.476715  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.673230  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:10.732312  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:10.736051  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.736087  941476 retry.go:31] will retry after 8.123592788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.976475  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.976550  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.976847  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:10.976888  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:11.476725  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.476822  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.477169  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:11.976044  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.976120  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.976458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.976059  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.976141  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:13.476058  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.476137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.476490  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:13.476548  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:13.898101  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:13.964340  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:13.967836  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.967879  941476 retry.go:31] will retry after 8.492520723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
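The retry.go line above schedules another `kubectl apply` with a growing delay because the apply cannot even validate: kubectl fetches the OpenAPI schema from the apiserver before applying, and with the apiserver refusing connections the validation step itself fails (hence the hint about --validate=false). A rough sketch of such an apply-with-retry loop follows, assuming a plain multiplicative backoff; minikube's actual retry helper may differ.

    // apply_retry_sketch.go: keep re-running `kubectl apply` until the
    // apiserver answers, echoing the addons.go/retry.go lines above.
    // The backoff policy here is a simplification, not minikube's own.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        delay := 8 * time.Second // first delay in the log is ~8.5s
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("%w: %s", err, out)
            fmt.Printf("apply failed, will retry after %v: %v\n", delay, lastErr)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the delay between attempts
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }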
	[... same GET polled every ~500ms from 10:30:13.976 to 10:30:18.476, all connection refused; node_ready "will retry" warnings at 10:30:15.976 and 10:30:17.976 ...]
	I1213 10:30:18.859953  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:18.916800  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:18.920763  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.920813  941476 retry.go:31] will retry after 11.17407044s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET polled every ~500ms from 10:30:18.976 to 10:30:21.976, all connection refused; node_ready "will retry" warning at 10:30:20.476 ...]
	I1213 10:30:22.460571  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:22.521379  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:22.525059  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:22.525092  941476 retry.go:31] will retry after 25.139993985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET polled every ~500ms from 10:30:22.976 to 10:30:29.976, all connection refused; node_ready "will retry" warnings at 10:30:22.977, 10:30:25.476 and 10:30:27.976 ...]
	I1213 10:30:30.096045  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:30.160844  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:30.160891  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:30.160917  941476 retry.go:31] will retry after 23.835716192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET polled every ~500ms from 10:30:30.476 to 10:30:47.476, all connection refused; node_ready "will retry" warnings roughly every 2s (10:30:30.476 through 10:30:45.976) ...]
	I1213 10:30:47.665860  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:47.731394  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:47.731441  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:47.731460  941476 retry.go:31] will retry after 19.194003802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET polled every ~500ms from 10:30:47.976 to 10:30:53.976, all connection refused; node_ready "will retry" warnings at 10:30:48.476, 10:30:50.976 and 10:30:52.977 ...]
	I1213 10:30:53.997712  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:54.059604  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:54.063660  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:54.063694  941476 retry.go:31] will retry after 30.126310408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... same GET polled every ~500ms from 10:30:54.476 to 10:31:02.476, all connection refused; node_ready "will retry" warnings at 10:30:55.476, 10:30:57.976, 10:30:59.976 and 10:31:02.476 ...]
	I1213 10:31:02.976184  941476 type.go:168] "Request Body" body=""
	I1213 10:31:02.976261  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:02.976604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:03.476321  941476 type.go:168] "Request Body" body=""
	I1213 10:31:03.476405  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:03.476656  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:03.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:31:03.976062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:03.976373  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:04.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:31:04.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:04.476440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:04.976145  941476 type.go:168] "Request Body" body=""
	I1213 10:31:04.976215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:04.976528  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:04.976587  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:05.476046  941476 type.go:168] "Request Body" body=""
	I1213 10:31:05.476128  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:05.476503  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:05.976329  941476 type.go:168] "Request Body" body=""
	I1213 10:31:05.976404  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:05.976818  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:06.476644  941476 type.go:168] "Request Body" body=""
	I1213 10:31:06.476727  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:06.476990  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:06.925824  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:06.976406  941476 type.go:168] "Request Body" body=""
	I1213 10:31:06.976485  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:06.976757  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:06.976800  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:06.991385  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991438  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991540  941476 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
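The failure above follows minikube's "apply failed, will retry" pattern: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, which is refusing connections, so the apply fails before anything reaches the cluster. Below is a minimal sketch of that retry-with-backoff loop, shelling out to kubectl the way the log does; the helper name, attempt count, and backoff policy are illustrative assumptions, not minikube's actual code.

```go
// Sketch of an "apply failed, will retry" loop (assumed structure, not
// minikube's implementation): run kubectl apply and back off while the
// apiserver is still refusing connections.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		// --validate=false would skip the OpenAPI download that fails in the
		// log above, at the cost of client-side schema validation.
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
		time.Sleep(time.Duration(i+1) * time.Second) // linear backoff between attempts
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```

As the stderr hints, --validate=false would sidestep the OpenAPI fetch, but it would not fix the underlying refused connection on port 8441.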
	I1213 10:31:07.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:31:07.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:07.476475  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET was retried every ~500ms through 10:31:23; node_ready.go:55 warned "connection refused" (will retry) at 10:31:09, 10:31:11, 10:31:14, 10:31:16, 10:31:18, 10:31:20, and 10:31:22 ...]
	I1213 10:31:24.190915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:24.248888  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.248934  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.249045  941476 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
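Both addon applies fail with the same symptom: nothing is listening on localhost:8441 or 192.168.49.2:8441. A standard-library-only probe (an illustrative diagnostic, not part of the test harness) reproduces the dial error directly:

```go
// Probe the apiserver endpoint from the log and report whether the
// "connection refused" symptom is still present.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```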
	I1213 10:31:24.254122  941476 out.go:179] * Enabled addons: 
	I1213 10:31:24.256914  941476 addons.go:530] duration metric: took 1m38.304545325s for enable addons: enabled=[]
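The GET /api/v1/nodes/functional-200955 lines that dominate this log come from a readiness poll: fetch the node every ~500ms and retry on error until its Ready condition is True. The sketch below expresses that loop with client-go; the kubeconfig path and node name are taken from the log, while the loop structure is an assumption rather than minikube's exact implementation.

```go
// Readiness-poll sketch (assumed structure): get the node every 500ms,
// retrying on errors such as "connect: connection refused", until the
// NodeReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-200955", metav1.GetOptions{})
		if err != nil {
			fmt.Println("will retry:", err) // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
```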
	I1213 10:31:24.476214  941476 type.go:168] "Request Body" body=""
	I1213 10:31:24.476305  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.476571  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET was retried every ~500ms through 10:31:53; node_ready.go:55 warned "connection refused" (will retry) roughly every two seconds from 10:31:25 through 10:31:51 ...]
	I1213 10:31:54.476283  941476 type.go:168] "Request Body" body=""
	I1213 10:31:54.476358  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.476609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:54.476652  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:54.976007  941476 type.go:168] "Request Body" body=""
	I1213 10:31:54.976081  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.976403  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.476020  941476 type.go:168] "Request Body" body=""
	I1213 10:31:55.476101  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.976175  941476 type.go:168] "Request Body" body=""
	I1213 10:31:55.976246  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.976517  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:31:56.476086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.476452  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:56.976090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:56.976513  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:57.476145  941476 type.go:168] "Request Body" body=""
	I1213 10:31:57.476215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.476478  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:57.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:31:57.976085  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.976451  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:31:58.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.476420  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.976112  941476 type.go:168] "Request Body" body=""
	I1213 10:31:58.976184  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.976451  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:31:59.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.476444  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:59.476501  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:59.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:31:59.976103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.976445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:00.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:32:00.476100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:00.976042  941476 type.go:168] "Request Body" body=""
	I1213 10:32:00.976122  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.976457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.476038  941476 type.go:168] "Request Body" body=""
	I1213 10:32:01.476135  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.476461  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:01.476525  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:01.976433  941476 type.go:168] "Request Body" body=""
	I1213 10:32:01.976500  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.976760  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:02.476646  941476 type.go:168] "Request Body" body=""
	I1213 10:32:02.476736  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.477125  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:02.976957  941476 type.go:168] "Request Body" body=""
	I1213 10:32:02.977037  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.977386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.476003  941476 type.go:168] "Request Body" body=""
	I1213 10:32:03.476067  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.476327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:32:03.976099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.976425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:03.976487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:04.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:04.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.476477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:04.976189  941476 type.go:168] "Request Body" body=""
	I1213 10:32:04.976259  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.976524  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:32:05.476131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.476494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.976279  941476 type.go:168] "Request Body" body=""
	I1213 10:32:05.976358  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.976703  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:05.976759  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:06.476417  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.476497  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.476760  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.976645  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.976724  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.977077  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.476899  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.476981  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.477364  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.476070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.476152  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.476469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:08.476525  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:08.976049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.476367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.976056  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.976488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.476201  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.476604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:10.476662  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:10.975985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.976386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.476435  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.976014  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.976091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.476328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.976433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:12.976487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.976139  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.976217  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.976477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.476149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.476488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.976200  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.976280  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:14.976691  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:15.476331  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.476407  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.476718  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.976843  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.976916  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.977265  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.476944  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.477018  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.477394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.976173  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.476515  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:17.976191  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.976268  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.976582  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.475997  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.476340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.976113  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.976206  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.976563  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.476129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.476456  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.976166  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.976467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:19.976522  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.476441  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.976163  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.976242  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.976531  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.475975  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.476045  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.476354  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.976036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.976111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.976471  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.476157  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.476236  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.476595  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.476649  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:22.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.976063  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.976350  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.476117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.976206  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.976283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.976637  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.476065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.476346  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.976054  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.976136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.976464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:24.976520  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.476258  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.976593  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.976662  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.976936  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.476747  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.476821  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.477090  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.975948  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.976024  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.476084  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.476158  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.476474  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:27.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.976087  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.976410  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.476155  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.476244  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.476588  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.976255  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.976331  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.976594  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.476476  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:29.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:29.976055  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.976132  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.976460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.976108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.976436  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.476119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.976398  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.976466  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.976719  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:31.976758  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.476588  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.476670  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.477064  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.976842  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.976917  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.977255  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.475960  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.476032  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.476294  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.976448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.476153  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.476568  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.476624  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.976336  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.976618  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.476037  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.476116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.476453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.976396  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.976472  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.976804  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.476554  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.476624  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.476895  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.476937  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:36.976884  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.976958  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.977293  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.976074  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.976340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.476062  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.476138  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.476437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.976005  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.976078  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.976403  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.976454  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.475982  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.476428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.976002  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.976082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.476038  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.476462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.976166  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.976245  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.976502  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:40.976544  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.476073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.976208  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.976289  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.976643  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.476353  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.976069  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.976137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.476306  941476 type.go:168] "Request Body" body=""
	I1213 10:32:43.476396  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.476750  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.476809  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[~120 near-identical poll cycles elided, 10:32:43.98 through 10:33:44.48: the same "Request" GET https://192.168.49.2:8441/api/v1/nodes/functional-200955 (with the same Accept and User-Agent headers) is retried every 500ms, every "Response" comes back empty (status="" headers="" milliseconds=0), and node_ready.go:55 logs the same "dial tcp 192.168.49.2:8441: connect: connection refused" warning roughly every two seconds]
	I1213 10:33:44.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:33:44.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.976470  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.476385  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.976244  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.976320  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.976638  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:46.476051  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.476479  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:46.476535  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:46.975987  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.976313  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.976041  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.976125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:48.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.476522  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:48.476583  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:48.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.976075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.976407  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:33:49.476190  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.476513  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.975984  941476 type.go:168] "Request Body" body=""
	I1213 10:33:49.976052  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.976304  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:50.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:33:50.476105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.476430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:50.976125  941476 type.go:168] "Request Body" body=""
	I1213 10:33:50.976206  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.976556  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:50.976613  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:51.476129  941476 type.go:168] "Request Body" body=""
	I1213 10:33:51.476201  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:51.476471  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:51.976229  941476 type.go:168] "Request Body" body=""
	I1213 10:33:51.976307  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:51.976619  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:52.476357  941476 type.go:168] "Request Body" body=""
	I1213 10:33:52.476455  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.476789  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:52.976543  941476 type.go:168] "Request Body" body=""
	I1213 10:33:52.976619  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.976876  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:52.976919  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:53.476697  941476 type.go:168] "Request Body" body=""
	I1213 10:33:53.476776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:53.477117  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:53.976881  941476 type.go:168] "Request Body" body=""
	I1213 10:33:53.976953  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:53.977282  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:54.476946  941476 type.go:168] "Request Body" body=""
	I1213 10:33:54.477041  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:54.477322  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:54.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:33:54.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:54.976426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:55.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:33:55.476124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.476467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:55.476548  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:55.976476  941476 type.go:168] "Request Body" body=""
	I1213 10:33:55.976544  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.976834  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.476663  941476 type.go:168] "Request Body" body=""
	I1213 10:33:56.476741  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.477071  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.975949  941476 type.go:168] "Request Body" body=""
	I1213 10:33:56.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.976420  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:57.475988  941476 type.go:168] "Request Body" body=""
	I1213 10:33:57.476057  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.476315  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:57.976051  941476 type.go:168] "Request Body" body=""
	I1213 10:33:57.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.976419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:57.976467  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:58.476120  941476 type.go:168] "Request Body" body=""
	I1213 10:33:58.476204  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.476550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:58.976099  941476 type.go:168] "Request Body" body=""
	I1213 10:33:58.976165  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.976418  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:59.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:59.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:59.976134  941476 type.go:168] "Request Body" body=""
	I1213 10:33:59.976218  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.976654  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:59.976717  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:00.476365  941476 type.go:168] "Request Body" body=""
	I1213 10:34:00.476441  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.476723  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:00.976554  941476 type.go:168] "Request Body" body=""
	I1213 10:34:00.976626  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.976899  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:01.476692  941476 type.go:168] "Request Body" body=""
	I1213 10:34:01.476765  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.477095  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:01.976838  941476 type.go:168] "Request Body" body=""
	I1213 10:34:01.976916  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.977190  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:01.977235  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:02.475885  941476 type.go:168] "Request Body" body=""
	I1213 10:34:02.475972  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.476308  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:34:02.976106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.976439  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:03.476117  941476 type.go:168] "Request Body" body=""
	I1213 10:34:03.476185  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:03.976078  941476 type.go:168] "Request Body" body=""
	I1213 10:34:03.976164  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.976510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:34:04.476208  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.476533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:04.476591  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:04.975971  941476 type.go:168] "Request Body" body=""
	I1213 10:34:04.976047  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.976363  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.476003  941476 type.go:168] "Request Body" body=""
	I1213 10:34:05.476075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.476405  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:34:05.976243  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.976545  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.476105  941476 type.go:168] "Request Body" body=""
	I1213 10:34:06.476180  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.476517  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.976527  941476 type.go:168] "Request Body" body=""
	I1213 10:34:06.976615  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.976986  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:06.977056  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:07.476804  941476 type.go:168] "Request Body" body=""
	I1213 10:34:07.476894  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.477246  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.975926  941476 type.go:168] "Request Body" body=""
	I1213 10:34:07.975997  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.976254  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:34:08.476102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.476453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.976173  941476 type.go:168] "Request Body" body=""
	I1213 10:34:08.976254  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.976538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.476199  941476 type.go:168] "Request Body" body=""
	I1213 10:34:09.476277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.476604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:09.476655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:09.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:34:09.976097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.976401  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.476158  941476 type.go:168] "Request Body" body=""
	I1213 10:34:10.476243  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.476583  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.975974  941476 type.go:168] "Request Body" body=""
	I1213 10:34:10.976050  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.976361  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.476040  941476 type.go:168] "Request Body" body=""
	I1213 10:34:11.476133  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.476485  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.976462  941476 type.go:168] "Request Body" body=""
	I1213 10:34:11.976535  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.976826  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:11.976874  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:12.476533  941476 type.go:168] "Request Body" body=""
	I1213 10:34:12.476615  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.476871  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:12.976701  941476 type.go:168] "Request Body" body=""
	I1213 10:34:12.976782  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.977103  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.476950  941476 type.go:168] "Request Body" body=""
	I1213 10:34:13.477040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.477394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:34:13.976154  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:34:14.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.476448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:14.476506  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:14.976192  941476 type.go:168] "Request Body" body=""
	I1213 10:34:14.976290  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.976612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.475988  941476 type.go:168] "Request Body" body=""
	I1213 10:34:15.476090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.476371  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.975973  941476 type.go:168] "Request Body" body=""
	I1213 10:34:15.976053  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.976336  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:34:16.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.976232  941476 type.go:168] "Request Body" body=""
	I1213 10:34:16.976307  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.976573  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:16.976615  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:17.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:34:17.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.476467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:17.976182  941476 type.go:168] "Request Body" body=""
	I1213 10:34:17.976258  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.976609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.476297  941476 type.go:168] "Request Body" body=""
	I1213 10:34:18.476413  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.476678  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.976046  941476 type.go:168] "Request Body" body=""
	I1213 10:34:18.976123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.976446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:19.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:34:19.476117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.476440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:19.476499  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:19.976045  941476 type.go:168] "Request Body" body=""
	I1213 10:34:19.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.976535  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:34:20.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.476428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:34:20.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.976470  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:21.476149  941476 type.go:168] "Request Body" body=""
	I1213 10:34:21.476232  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.476535  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:21.476579  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:21.976488  941476 type.go:168] "Request Body" body=""
	I1213 10:34:21.976565  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.976917  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.476734  941476 type.go:168] "Request Body" body=""
	I1213 10:34:22.476814  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.477160  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.976929  941476 type.go:168] "Request Body" body=""
	I1213 10:34:22.977004  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.977264  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.475962  941476 type.go:168] "Request Body" body=""
	I1213 10:34:23.476043  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.476394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:34:23.976073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.976409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:23.976471  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:24.475994  941476 type.go:168] "Request Body" body=""
	I1213 10:34:24.476067  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.476343  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:34:24.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.976425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.476120  941476 type.go:168] "Request Body" body=""
	I1213 10:34:25.476209  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.476597  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.976332  941476 type.go:168] "Request Body" body=""
	I1213 10:34:25.976407  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.976654  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:25.976698  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:26.476026  941476 type.go:168] "Request Body" body=""
	I1213 10:34:26.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:26.976015  941476 type.go:168] "Request Body" body=""
	I1213 10:34:26.976105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.976489  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.476210  941476 type.go:168] "Request Body" body=""
	I1213 10:34:27.476283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.476557  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.976225  941476 type.go:168] "Request Body" body=""
	I1213 10:34:27.976306  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.976615  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:28.476015  941476 type.go:168] "Request Body" body=""
	I1213 10:34:28.476091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.476427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:28.476486  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:28.976015  941476 type.go:168] "Request Body" body=""
	I1213 10:34:28.976082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.976344  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:34:29.476111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.476510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.976204  941476 type.go:168] "Request Body" body=""
	I1213 10:34:29.976284  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.976620  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:30.476184  941476 type.go:168] "Request Body" body=""
	I1213 10:34:30.476253  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.476523  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:30.476567  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:30.976028  941476 type.go:168] "Request Body" body=""
	I1213 10:34:30.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.976466  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.476057  941476 type.go:168] "Request Body" body=""
	I1213 10:34:31.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.476442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.976251  941476 type.go:168] "Request Body" body=""
	I1213 10:34:31.976330  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.976592  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.476291  941476 type.go:168] "Request Body" body=""
	I1213 10:34:32.476378  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.476726  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:32.476797  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:32.976040  941476 type.go:168] "Request Body" body=""
	I1213 10:34:32.976118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.976497  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.476816  941476 type.go:168] "Request Body" body=""
	I1213 10:34:33.476896  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.477256  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.975966  941476 type.go:168] "Request Body" body=""
	I1213 10:34:33.976050  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:34:34.476192  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:34.976613  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-200955 request/response pairs repeated at ~500 ms intervals from 10:34:34 through 10:35:36; every attempt returned no response and failed with "dial tcp 192.168.49.2:8441: connect: connection refused", and the node_ready.go:55 "will retry" warning recurred roughly every 2 s ...]
	I1213 10:35:36.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:35:36.976334  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.976678  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.476392  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.476480  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.476822  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.976607  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.976691  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.976956  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:38.476714  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.476786  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.477099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:38.477160  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:38.976964  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.977048  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.977472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.476371  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.976024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.476250  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.476607  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.975981  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.976060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.976331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:40.976379  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:41.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:41.976057  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.480099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 10:35:42.976928  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.977007  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.977373  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:42.977438  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:43.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.476136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.476497  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:43.976064  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.976405  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.476012  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.976181  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.976260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.976576  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.476263  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.476338  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.476639  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:45.476717  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:45.976693  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.976776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.977113  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.476938  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.477014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.477384  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.976091  941476 node_ready.go:38] duration metric: took 6m0.000294728s for node "functional-200955" to be "Ready" ...
	I1213 10:35:46.979089  941476 out.go:203] 
	W1213 10:35:46.981875  941476 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:35:46.981899  941476 out.go:285] * 
	W1213 10:35:46.984058  941476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:35:46.987297  941476 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-200955 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.969016877s for "functional-200955" cluster.
I1213 10:35:47.520589  907484 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
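Every retry in the six-minute wait loop above dies the same way, with `dial tcp 192.168.49.2:8441: connect: connection refused`, which points at an apiserver that never came up rather than a flaky network. A manual probe of the same endpoint would separate a dead apiserver from a routing problem; this is a diagnostic sketch, not part of the test run, and it assumes the host can still reach the cluster network and that crictl is present in the node container:

	# Hit the apiserver on the cluster network; -k skips TLS verification.
	curl -k --max-time 5 https://192.168.49.2:8441/healthz
	# Same endpoint via the host port Docker published for 8441/tcp (33526 per the inspect output below).
	curl -k --max-time 5 https://127.0.0.1:33526/healthz
	# If both refuse the connection, check whether a kube-apiserver container exists at all.
	docker exec functional-200955 crictl ps -a --name kube-apiserver
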
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
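The NetworkSettings.Ports block above is where the host-side port mappings live. A Go template pulls a single mapping out of the JSON, the same pattern the minikube log below uses for 22/tcp; the port value here is read off the inspect output above:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-200955
	# prints 33526, the host port fronting the apiserver
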
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (339.617133ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
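Exit status 2 alongside Host=Running is consistent: minikube status encodes component health bitwise in its exit code, so a running container says nothing about the apiserver. The other fields of the status struct are reachable through the same --format flag; a sketch, with the likely output inferred from the refused connections above:

	out/minikube-linux-arm64 status -p functional-200955 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# likely: Running Running Stopped
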
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 logs -n 25: (1.021038351s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-769798 ssh sudo cat /usr/share/ca-certificates/907484.pem                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/test/nested/copy/907484/hosts                                                                                 │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/9074842.pem                                                                                         │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /usr/share/ca-certificates/9074842.pem                                                                             │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /home/docker/cp-test.txt                                                                      │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp functional-769798:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1526269303/001/cp-test.txt                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /home/docker/cp-test.txt                                                                      │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                         │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format short --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /tmp/does/not/exist/cp-test.txt                                                               │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format yaml --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh pgrep buildkitd                                                                                                             │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ image          │ functional-769798 image ls --format json --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr                                            │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format table --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls                                                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete         │ -p functional-769798                                                                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start          │ -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-200955 --alsologtostderr -v=8                                                                                                       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:29:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:29:41.597851  941476 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:29:41.597968  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.597980  941476 out.go:374] Setting ErrFile to fd 2...
	I1213 10:29:41.597985  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.598264  941476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:29:41.598640  941476 out.go:368] Setting JSON to false
	I1213 10:29:41.599496  941476 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18731,"bootTime":1765603051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:29:41.599570  941476 start.go:143] virtualization:  
	I1213 10:29:41.603284  941476 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:29:41.606132  941476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:29:41.606240  941476 notify.go:221] Checking for updates...
	I1213 10:29:41.611909  941476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:29:41.614766  941476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:41.617588  941476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:29:41.620495  941476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:29:41.623575  941476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:29:41.626951  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:41.627063  941476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:29:41.660528  941476 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:29:41.660648  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.716071  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.706597811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.716181  941476 docker.go:319] overlay module found
	I1213 10:29:41.719241  941476 out.go:179] * Using the docker driver based on existing profile
	I1213 10:29:41.721997  941476 start.go:309] selected driver: docker
	I1213 10:29:41.722027  941476 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.722127  941476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:29:41.722252  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.778165  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.768783539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.778600  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:41.778650  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:41.778703  941476 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.781806  941476 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:29:41.784501  941476 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:29:41.787625  941476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:29:41.790577  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:41.790637  941476 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:29:41.790650  941476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:29:41.790656  941476 cache.go:65] Caching tarball of preloaded images
	I1213 10:29:41.790739  941476 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:29:41.790750  941476 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:29:41.790859  941476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:29:41.809947  941476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:29:41.809969  941476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:29:41.809989  941476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:29:41.810023  941476 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:29:41.810091  941476 start.go:364] duration metric: took 45.924µs to acquireMachinesLock for "functional-200955"
	I1213 10:29:41.810115  941476 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:29:41.810124  941476 fix.go:54] fixHost starting: 
	I1213 10:29:41.810397  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:41.827321  941476 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:29:41.827351  941476 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:29:41.830448  941476 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:29:41.830480  941476 machine.go:94] provisionDockerMachine start ...
	I1213 10:29:41.830562  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:41.846863  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:41.847197  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:41.847214  941476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:29:41.996943  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:41.996971  941476 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:29:41.997042  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.018825  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.019169  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.019192  941476 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:29:42.186347  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:42.186459  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.209314  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.209694  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.209712  941476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:29:42.370026  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:29:42.370125  941476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:29:42.370174  941476 ubuntu.go:190] setting up certificates
	I1213 10:29:42.370200  941476 provision.go:84] configureAuth start
	I1213 10:29:42.370268  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:42.388638  941476 provision.go:143] copyHostCerts
	I1213 10:29:42.388684  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388728  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:29:42.388739  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388819  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:29:42.388924  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388947  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:29:42.388956  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388985  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:29:42.389034  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389056  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:29:42.389064  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389093  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:29:42.389148  941476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:29:42.553052  941476 provision.go:177] copyRemoteCerts
	I1213 10:29:42.553125  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:29:42.553174  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.571937  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:42.681380  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:29:42.681440  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:29:42.698297  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:29:42.698381  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:29:42.715245  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:29:42.715360  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:29:42.732152  941476 provision.go:87] duration metric: took 361.926272ms to configureAuth
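configureAuth above copies the host CA material and then (provision.go:117) signs a server certificate whose SANs cover the loopback address, the container IP, and the node's DNS names, before scp'ing the results to /etc/docker. A hedged sketch of that signing step with Go's crypto/x509; the file names, PKCS#1 key format, RSA key size, and serial number are assumptions, while the 26280h lifetime echoes the CertExpiration value in the cluster config logged further down:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the CA pair (ca.pem / ca-key.pem in the log). PKCS#1 is an
        // assumption; a PKCS#8 key would need x509.ParsePKCS8PrivateKey.
        caPEM, err := os.ReadFile("ca.pem")
        must(err)
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        must(err)
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        must(err)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        must(err)

        // Fresh server key plus a template carrying the SANs from the
        // provision.go line: 127.0.0.1, 192.168.49.2, and the DNS names.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), // illustrative; should be random
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-200955"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:     []string{"functional-200955", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        must(err)
        must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }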
	I1213 10:29:42.732184  941476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:29:42.732358  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:42.732458  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.749290  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.749620  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.749643  941476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:29:43.093593  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:29:43.093619  941476 machine.go:97] duration metric: took 1.263130563s to provisionDockerMachine
	I1213 10:29:43.093630  941476 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:29:43.093643  941476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:29:43.093703  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:29:43.093752  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.110551  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.213067  941476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:29:43.216076  941476 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:29:43.216096  941476 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:29:43.216102  941476 command_runner.go:130] > VERSION_ID="12"
	I1213 10:29:43.216108  941476 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:29:43.216112  941476 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:29:43.216116  941476 command_runner.go:130] > ID=debian
	I1213 10:29:43.216121  941476 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:29:43.216125  941476 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:29:43.216147  941476 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:29:43.216196  941476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:29:43.216219  941476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:29:43.216231  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:29:43.216286  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:29:43.216365  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:29:43.216375  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /etc/ssl/certs/9074842.pem
	I1213 10:29:43.216452  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:29:43.216461  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> /etc/test/nested/copy/907484/hosts
	I1213 10:29:43.216512  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:29:43.223706  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:43.242619  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:29:43.261652  941476 start.go:296] duration metric: took 168.007176ms for postStartSetup
	I1213 10:29:43.261748  941476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:29:43.261797  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.278068  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.377852  941476 command_runner.go:130] > 19%
	I1213 10:29:43.378272  941476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:29:43.382521  941476 command_runner.go:130] > 159G
	I1213 10:29:43.382892  941476 fix.go:56] duration metric: took 1.572759496s for fixHost
	I1213 10:29:43.382913  941476 start.go:83] releasing machines lock for "functional-200955", held for 1.572809064s
	I1213 10:29:43.382984  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:43.399315  941476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:29:43.399334  941476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:29:43.399371  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.399397  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.423081  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.424445  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.612877  941476 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:29:43.615557  941476 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:29:43.615725  941476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:29:43.621711  941476 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:29:43.621746  941476 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:29:43.622124  941476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:29:43.667216  941476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:29:43.671902  941476 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:29:43.672160  941476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:29:43.672241  941476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:29:43.679969  941476 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
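The find invocation above renames any bridge or podman CNI configs so CRI-O's own network setup wins, leaving files that already carry the .mk_disabled suffix alone. A rough Go equivalent of that filter-and-rename pass (the directory path comes from the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            // Mirror the find filter: regular files named *bridge* or
            // *podman*, skipping anything already disabled.
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    continue
                }
                fmt.Println("disabled", src)
            }
        }
    }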
	I1213 10:29:43.679994  941476 start.go:496] detecting cgroup driver to use...
	I1213 10:29:43.680025  941476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:29:43.680082  941476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:29:43.694816  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:29:43.708840  941476 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:29:43.708902  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:29:43.727390  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:29:43.741194  941476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:29:43.853170  941476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:29:43.965117  941476 docker.go:234] disabling docker service ...
	I1213 10:29:43.965193  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:29:43.981069  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:29:43.993651  941476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:29:44.106510  941476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:29:44.230950  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
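Stopping the socket before the service, then disabling the socket and masking the service, is what keeps socket activation from resurrecting cri-docker or docker behind minikube's back. A small Go sketch that replays the same systemctl sequence shown above (the unit names come from the log; the error handling is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, unit := range []string{"cri-docker", "docker"} {
            for _, args := range [][]string{
                {"systemctl", "stop", "-f", unit + ".socket"},
                {"systemctl", "stop", "-f", unit + ".service"},
                {"systemctl", "disable", unit + ".socket"},
                {"systemctl", "mask", unit + ".service"},
            } {
                // Failures are reported but not fatal, matching how the
                // log simply proceeds through each step.
                if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                    fmt.Printf("%v: %v (%s)\n", args, err, out)
                }
            }
        }
    }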
	I1213 10:29:44.243823  941476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:29:44.258241  941476 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:29:44.259524  941476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:29:44.259625  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.267965  941476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:29:44.268046  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.277059  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.285643  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.295522  941476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:29:44.303650  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.312274  941476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.320905  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
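Taken together, those sed edits leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is an illustrative reconstruction, not a dump of the real file: only the values the commands touch are shown, and the section placement follows crio.conf(5) plus the `crio config` output later in this log:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]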
	I1213 10:29:44.329531  941476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:29:44.336129  941476 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:29:44.337017  941476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:29:44.344665  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:44.479199  941476 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:29:44.656815  941476 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:29:44.656943  941476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:29:44.660542  941476 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:29:44.660573  941476 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:29:44.660581  941476 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 10:29:44.660588  941476 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:44.660594  941476 command_runner.go:130] > Access: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660602  941476 command_runner.go:130] > Modify: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660608  941476 command_runner.go:130] > Change: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660615  941476 command_runner.go:130] >  Birth: -
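start.go:543 above bounds the wait for the CRI-O socket at 60s before confirming it with stat. A minimal Go sketch of that kind of bounded poll (the 500ms interval and the function name are assumptions):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists as a unix socket or the
    // deadline passes, mirroring "Will wait 60s for socket path".
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil // socket exists, as the stat output above confirms
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }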
	I1213 10:29:44.660643  941476 start.go:564] Will wait 60s for crictl version
	I1213 10:29:44.660697  941476 ssh_runner.go:195] Run: which crictl
	I1213 10:29:44.664032  941476 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:29:44.664157  941476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:29:44.686934  941476 command_runner.go:130] > Version:  0.1.0
	I1213 10:29:44.686958  941476 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:29:44.686965  941476 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:29:44.686970  941476 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:29:44.687007  941476 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:29:44.687101  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.715374  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.715400  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.715407  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.715412  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.715417  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.715422  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.715435  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.715442  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.715446  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.715453  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.715457  941476 command_runner.go:130] >      static
	I1213 10:29:44.715461  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.715464  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.715476  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.715480  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.715484  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.715492  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.715496  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.715504  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.715508  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.717596  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.744267  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.744305  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.744312  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.744317  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.744322  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.744327  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.744331  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.744337  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.744341  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.744346  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.744350  941476 command_runner.go:130] >      static
	I1213 10:29:44.744376  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.744385  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.744390  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.744393  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.744397  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.744406  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.744411  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.744419  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.744424  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.751529  941476 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:29:44.754410  941476 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:29:44.770603  941476 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:29:44.774419  941476 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:29:44.774622  941476 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:29:44.774752  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:44.774840  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.811833  941476 command_runner.go:130] > {
	I1213 10:29:44.811851  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.811855  941476 command_runner.go:130] >     {
	I1213 10:29:44.811864  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.811869  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811875  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.811879  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811883  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811892  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.811900  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.811904  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811908  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.811912  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811920  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811923  941476 command_runner.go:130] >     },
	I1213 10:29:44.811927  941476 command_runner.go:130] >     {
	I1213 10:29:44.811933  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.811938  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811944  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.811947  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811951  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811959  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.811968  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.811980  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811984  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.811988  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811994  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811997  941476 command_runner.go:130] >     },
	I1213 10:29:44.812000  941476 command_runner.go:130] >     {
	I1213 10:29:44.812007  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.812011  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812017  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.812020  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812024  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812032  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.812040  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.812047  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812051  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.812056  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.812059  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812062  941476 command_runner.go:130] >     },
	I1213 10:29:44.812066  941476 command_runner.go:130] >     {
	I1213 10:29:44.812073  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.812076  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812081  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.812085  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812089  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812097  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.812104  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.812109  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812113  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.812116  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812120  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812123  941476 command_runner.go:130] >       },
	I1213 10:29:44.812132  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812136  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812143  941476 command_runner.go:130] >     },
	I1213 10:29:44.812146  941476 command_runner.go:130] >     {
	I1213 10:29:44.812152  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.812156  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812161  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.812164  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812168  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812176  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.812184  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.812187  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812191  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.812195  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812198  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812201  941476 command_runner.go:130] >       },
	I1213 10:29:44.812204  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812208  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812211  941476 command_runner.go:130] >     },
	I1213 10:29:44.812213  941476 command_runner.go:130] >     {
	I1213 10:29:44.812220  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.812224  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812230  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.812233  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812236  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812244  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.812253  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.812256  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812259  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.812263  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812266  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812269  941476 command_runner.go:130] >       },
	I1213 10:29:44.812273  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812277  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812280  941476 command_runner.go:130] >     },
	I1213 10:29:44.812286  941476 command_runner.go:130] >     {
	I1213 10:29:44.812293  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.812296  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812302  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.812304  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812308  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812316  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.812323  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.812326  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812330  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.812334  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812337  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812340  941476 command_runner.go:130] >     },
	I1213 10:29:44.812343  941476 command_runner.go:130] >     {
	I1213 10:29:44.812349  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.812353  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812358  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.812361  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812364  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812372  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.812390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.812393  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812397  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.812400  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812405  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812408  941476 command_runner.go:130] >       },
	I1213 10:29:44.812412  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812416  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812419  941476 command_runner.go:130] >     },
	I1213 10:29:44.812422  941476 command_runner.go:130] >     {
	I1213 10:29:44.812428  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.812432  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812436  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.812442  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812446  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812454  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.812462  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.812464  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812468  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.812471  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812475  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.812478  941476 command_runner.go:130] >       },
	I1213 10:29:44.812482  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812485  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.812488  941476 command_runner.go:130] >     }
	I1213 10:29:44.812491  941476 command_runner.go:130] >   ]
	I1213 10:29:44.812494  941476 command_runner.go:130] > }
	I1213 10:29:44.812656  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.812664  941476 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:29:44.812720  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.834840  941476 command_runner.go:130] > {
	I1213 10:29:44.834859  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.834863  941476 command_runner.go:130] >     {
	I1213 10:29:44.834871  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.834878  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834893  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.834897  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834903  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834913  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.834921  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.834924  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834928  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.834932  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.834941  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.834944  941476 command_runner.go:130] >     },
	I1213 10:29:44.834947  941476 command_runner.go:130] >     {
	I1213 10:29:44.834953  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.834957  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834962  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.834965  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834969  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834977  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.834986  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.834989  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834993  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.834997  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835006  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835009  941476 command_runner.go:130] >     },
	I1213 10:29:44.835013  941476 command_runner.go:130] >     {
	I1213 10:29:44.835019  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.835023  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835028  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.835032  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835036  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835044  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.835052  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.835055  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835058  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.835062  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.835066  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835069  941476 command_runner.go:130] >     },
	I1213 10:29:44.835073  941476 command_runner.go:130] >     {
	I1213 10:29:44.835080  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.835083  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835088  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.835093  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835100  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835108  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.835116  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.835119  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835123  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.835127  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835131  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835134  941476 command_runner.go:130] >       },
	I1213 10:29:44.835147  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835151  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835154  941476 command_runner.go:130] >     },
	I1213 10:29:44.835157  941476 command_runner.go:130] >     {
	I1213 10:29:44.835163  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.835167  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835172  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.835175  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835179  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835187  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.835195  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.835197  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835201  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.835205  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835209  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835212  941476 command_runner.go:130] >       },
	I1213 10:29:44.835215  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835219  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835222  941476 command_runner.go:130] >     },
	I1213 10:29:44.835224  941476 command_runner.go:130] >     {
	I1213 10:29:44.835231  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.835234  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835240  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.835243  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835247  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835261  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.835270  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.835273  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835277  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.835281  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835285  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835288  941476 command_runner.go:130] >       },
	I1213 10:29:44.835292  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835295  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835298  941476 command_runner.go:130] >     },
	I1213 10:29:44.835302  941476 command_runner.go:130] >     {
	I1213 10:29:44.835309  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.835312  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835318  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.835320  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835324  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835332  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.835340  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.835343  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835347  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.835351  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835355  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835358  941476 command_runner.go:130] >     },
	I1213 10:29:44.835361  941476 command_runner.go:130] >     {
	I1213 10:29:44.835367  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.835371  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835376  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.835379  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835383  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.835407  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.835411  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835415  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.835422  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835426  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835429  941476 command_runner.go:130] >       },
	I1213 10:29:44.835433  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835436  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835439  941476 command_runner.go:130] >     },
	I1213 10:29:44.835442  941476 command_runner.go:130] >     {
	I1213 10:29:44.835449  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.835452  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835457  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.835460  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835463  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835470  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.835478  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.835481  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835485  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.835489  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835492  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.835495  941476 command_runner.go:130] >       },
	I1213 10:29:44.835499  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835503  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.835506  941476 command_runner.go:130] >     }
	I1213 10:29:44.835508  941476 command_runner.go:130] >   ]
	I1213 10:29:44.835512  941476 command_runner.go:130] > }
	I1213 10:29:44.838144  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.838206  941476 cache_images.go:86] Images are preloaded, skipping loading
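The preload decision hinges on parsing `sudo crictl images --output json` (the shape shown above) and checking that every required tag is already present. A hedged Go sketch of that check; the struct fields follow the JSON in the log, while the required-tag list here is a small illustrative subset:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    // imageList matches the JSON shape printed by crictl above.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Illustrative subset of the tags the log shows as preloaded.
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/pause:3.10.1",
        } {
            if !have[want] {
                fmt.Printf("missing %s -> would extract the preload tarball\n", want)
            }
        }
    }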
	I1213 10:29:44.838219  941476 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:29:44.838324  941476 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
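kubeadm.go:947 logs the kubelet systemd unit it is about to install, with ExecStart rebuilt from the cluster config. A trimmed, illustrative sketch of rendering such a unit with text/template; this is not minikube's actual template, and several real flags (e.g. --bootstrap-kubeconfig) are omitted for brevity:

    package main

    import (
        "os"
        "text/template"
    )

    // A trimmed version of the unit shown above.
    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        if err := t.Execute(os.Stdout, map[string]string{
            "Version": "v1.35.0-beta.0",
            "Node":    "functional-200955",
            "IP":      "192.168.49.2",
        }); err != nil {
            panic(err)
        }
    }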
	I1213 10:29:44.838426  941476 ssh_runner.go:195] Run: crio config
	I1213 10:29:44.886075  941476 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:29:44.886098  941476 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:29:44.886106  941476 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:29:44.886110  941476 command_runner.go:130] > #
	I1213 10:29:44.886117  941476 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:29:44.886124  941476 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:29:44.886130  941476 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:29:44.886139  941476 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:29:44.886142  941476 command_runner.go:130] > # reload'.
	I1213 10:29:44.886162  941476 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:29:44.886169  941476 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:29:44.886175  941476 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:29:44.886181  941476 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:29:44.886184  941476 command_runner.go:130] > [crio]
	I1213 10:29:44.886190  941476 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:29:44.886195  941476 command_runner.go:130] > # containers images, in this directory.
	I1213 10:29:44.886932  941476 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:29:44.886948  941476 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:29:44.887520  941476 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:29:44.887536  941476 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:29:44.887990  941476 command_runner.go:130] > # imagestore = ""
	I1213 10:29:44.888002  941476 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:29:44.888019  941476 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:29:44.888390  941476 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:29:44.888402  941476 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:29:44.888409  941476 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:29:44.888578  941476 command_runner.go:130] > # storage_option = [
	I1213 10:29:44.888743  941476 command_runner.go:130] > # ]
	I1213 10:29:44.888754  941476 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:29:44.888761  941476 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:29:44.888765  941476 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:29:44.888771  941476 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:29:44.888787  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:29:44.888792  941476 command_runner.go:130] > # always happen on a node reboot
	I1213 10:29:44.888797  941476 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:29:44.888807  941476 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:29:44.888813  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:29:44.888818  941476 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:29:44.888822  941476 command_runner.go:130] > # version_file_persist = ""
	I1213 10:29:44.888829  941476 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:29:44.888839  941476 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:29:44.888843  941476 command_runner.go:130] > # internal_wipe = true
	I1213 10:29:44.888851  941476 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:29:44.888856  941476 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:29:44.888860  941476 command_runner.go:130] > # internal_repair = true
	I1213 10:29:44.888869  941476 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:29:44.888875  941476 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:29:44.888881  941476 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:29:44.888886  941476 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:29:44.888892  941476 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:29:44.888895  941476 command_runner.go:130] > [crio.api]
	I1213 10:29:44.888901  941476 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:29:44.888905  941476 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:29:44.888910  941476 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:29:44.888914  941476 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:29:44.888921  941476 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:29:44.888926  941476 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:29:44.888929  941476 command_runner.go:130] > # stream_port = "0"
	I1213 10:29:44.888934  941476 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:29:44.888938  941476 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:29:44.888944  941476 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:29:44.889110  941476 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:29:44.889121  941476 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:29:44.889127  941476 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889131  941476 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:29:44.889137  941476 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:29:44.889143  941476 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889156  941476 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:29:44.889162  941476 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:29:44.889169  941476 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:29:44.889177  941476 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:29:44.889181  941476 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:29:44.889197  941476 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889202  941476 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:29:44.889209  941476 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889214  941476 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
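	A minimal sketch of the [crio.api] table described above, with the stream server bound to a fixed port and TLS enabled (the cert/key paths are assumptions, not from this run):

	    [crio.api]
	    listen = "/var/run/crio/crio.sock"
	    stream_address = "127.0.0.1"
	    stream_port = "10010"                       # fixed instead of the random "0"
	    stream_enable_tls = true
	    stream_tls_cert = "/etc/crio/stream.crt"    # hypothetical path
	    stream_tls_key = "/etc/crio/stream.key"     # hypothetical path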
	I1213 10:29:44.889220  941476 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:29:44.889225  941476 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:29:44.889229  941476 command_runner.go:130] > [crio.runtime]
	I1213 10:29:44.889235  941476 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:29:44.889240  941476 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:29:44.889244  941476 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:29:44.889253  941476 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:29:44.889257  941476 command_runner.go:130] > # default_ulimits = [
	I1213 10:29:44.889260  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889265  941476 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:29:44.889269  941476 command_runner.go:130] > # no_pivot = false
	I1213 10:29:44.889274  941476 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:29:44.889280  941476 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:29:44.889285  941476 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:29:44.889291  941476 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:29:44.889296  941476 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:29:44.889318  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889322  941476 command_runner.go:130] > # conmon = ""
	I1213 10:29:44.889327  941476 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:29:44.889333  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:29:44.889512  941476 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:29:44.889563  941476 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:29:44.889585  941476 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:29:44.889610  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889647  941476 command_runner.go:130] > # conmon_env = [
	I1213 10:29:44.889671  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889696  941476 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:29:44.889721  941476 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:29:44.889753  941476 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:29:44.889776  941476 command_runner.go:130] > # default_env = [
	I1213 10:29:44.889797  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889822  941476 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:29:44.889858  941476 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:29:44.889885  941476 command_runner.go:130] > # selinux = false
	I1213 10:29:44.889906  941476 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:29:44.889932  941476 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:29:44.889962  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.889985  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.890009  941476 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:29:44.890029  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890061  941476 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:29:44.890087  941476 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:29:44.890109  941476 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:29:44.890133  941476 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:29:44.890166  941476 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1213 10:29:44.890191  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890212  941476 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:29:44.890236  941476 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:29:44.890284  941476 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:29:44.890307  941476 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:29:44.890329  941476 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:29:44.890350  941476 command_runner.go:130] > # blockio parameters.
	I1213 10:29:44.890409  941476 command_runner.go:130] > # blockio_reload = false
	I1213 10:29:44.890437  941476 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:29:44.890458  941476 command_runner.go:130] > # irqbalance daemon.
	I1213 10:29:44.890483  941476 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:29:44.890515  941476 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1213 10:29:44.890551  941476 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:29:44.890575  941476 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:29:44.890599  941476 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:29:44.890631  941476 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:29:44.890655  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890676  941476 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:29:44.890716  941476 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:29:44.890743  941476 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:29:44.890767  941476 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:29:44.890788  941476 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:29:44.890824  941476 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:29:44.890863  941476 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:29:44.890886  941476 command_runner.go:130] > # will be added.
	I1213 10:29:44.890904  941476 command_runner.go:130] > # default_capabilities = [
	I1213 10:29:44.890932  941476 command_runner.go:130] > # 	"CHOWN",
	I1213 10:29:44.890957  941476 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:29:44.891256  941476 command_runner.go:130] > # 	"FSETID",
	I1213 10:29:44.891291  941476 command_runner.go:130] > # 	"FOWNER",
	I1213 10:29:44.891318  941476 command_runner.go:130] > # 	"SETGID",
	I1213 10:29:44.891335  941476 command_runner.go:130] > # 	"SETUID",
	I1213 10:29:44.891390  941476 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:29:44.891416  941476 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:29:44.891438  941476 command_runner.go:130] > # 	"KILL",
	I1213 10:29:44.891461  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891498  941476 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:29:44.891527  941476 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:29:44.891550  941476 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:29:44.891572  941476 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:29:44.891606  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891629  941476 command_runner.go:130] > default_sysctls = [
	I1213 10:29:44.891651  941476 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:29:44.891671  941476 command_runner.go:130] > ]
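	This default_sysctls block is one of the few settings this provisioned config sets actively rather than leaving commented out. A sketch of extending it (the second entry is illustrative, not from this run):

	    default_sysctls = [
	    	"net.ipv4.ip_unprivileged_port_start=0",
	    	"net.ipv4.ping_group_range=0 2147483647",   # illustrative addition
	    ]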
	I1213 10:29:44.891705  941476 command_runner.go:130] > # List of devices on the host that a
	I1213 10:29:44.891730  941476 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:29:44.891749  941476 command_runner.go:130] > # allowed_devices = [
	I1213 10:29:44.891779  941476 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:29:44.891809  941476 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:29:44.891834  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891856  941476 command_runner.go:130] > # List of additional devices, specified as
	I1213 10:29:44.891880  941476 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:29:44.891914  941476 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:29:44.891940  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891962  941476 command_runner.go:130] > # additional_devices = [
	I1213 10:29:44.891983  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892017  941476 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:29:44.892041  941476 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:29:44.892063  941476 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:29:44.892082  941476 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:29:44.892103  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892139  941476 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:29:44.892161  941476 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:29:44.892183  941476 command_runner.go:130] > # Defaults to false.
	I1213 10:29:44.892215  941476 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:29:44.892243  941476 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:29:44.892267  941476 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:29:44.892287  941476 command_runner.go:130] > # hooks_dir = [
	I1213 10:29:44.892324  941476 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:29:44.892349  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892371  941476 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:29:44.892394  941476 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:29:44.892427  941476 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:29:44.892450  941476 command_runner.go:130] > #
	I1213 10:29:44.892472  941476 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:29:44.892496  941476 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:29:44.892529  941476 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:29:44.892555  941476 command_runner.go:130] > #
	I1213 10:29:44.892582  941476 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:29:44.892608  941476 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:29:44.892654  941476 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:29:44.892680  941476 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:29:44.892700  941476 command_runner.go:130] > #
	I1213 10:29:44.892722  941476 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:29:44.892742  941476 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:29:44.892779  941476 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:29:44.892797  941476 command_runner.go:130] > # pids_limit = -1
	I1213 10:29:44.892825  941476 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 10:29:44.892860  941476 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:29:44.892886  941476 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:29:44.892912  941476 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:29:44.892937  941476 command_runner.go:130] > # log_size_max = -1
	I1213 10:29:44.892967  941476 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:29:44.892992  941476 command_runner.go:130] > # log_to_journald = false
	I1213 10:29:44.893016  941476 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:29:44.893040  941476 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:29:44.893073  941476 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:29:44.893097  941476 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:29:44.893118  941476 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:29:44.893142  941476 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:29:44.893174  941476 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:29:44.893198  941476 command_runner.go:130] > # read_only = false
	I1213 10:29:44.893223  941476 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:29:44.893245  941476 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:29:44.893278  941476 command_runner.go:130] > # live configuration reload.
	I1213 10:29:44.893302  941476 command_runner.go:130] > # log_level = "info"
	I1213 10:29:44.893331  941476 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:29:44.893353  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.893380  941476 command_runner.go:130] > # log_filter = ""
	I1213 10:29:44.893406  941476 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893430  941476 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:29:44.893452  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893486  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893520  941476 command_runner.go:130] > # uid_mappings = ""
	I1213 10:29:44.893564  941476 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893593  941476 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:29:44.893617  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893643  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893997  941476 command_runner.go:130] > # gid_mappings = ""
	I1213 10:29:44.894010  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:29:44.894017  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894024  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894032  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894037  941476 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:29:44.894043  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:29:44.894050  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894056  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894064  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894068  941476 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:29:44.894074  941476 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:29:44.894081  941476 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:29:44.894086  941476 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:29:44.894090  941476 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:29:44.894096  941476 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:29:44.894102  941476 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:29:44.894107  941476 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:29:44.894111  941476 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:29:44.894115  941476 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:29:44.894121  941476 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:29:44.894127  941476 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:29:44.894135  941476 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:29:44.894141  941476 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:29:44.894149  941476 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:29:44.894155  941476 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:29:44.894160  941476 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:29:44.894165  941476 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:29:44.894173  941476 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:29:44.894179  941476 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:29:44.894184  941476 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:29:44.894188  941476 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:29:44.894195  941476 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:29:44.894199  941476 command_runner.go:130] > # pinns_path = ""
	I1213 10:29:44.894204  941476 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:29:44.894210  941476 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:29:44.894216  941476 command_runner.go:130] > # enable_criu_support = true
	I1213 10:29:44.894223  941476 command_runner.go:130] > # Enable/disable the generation of the container and
	I1213 10:29:44.894229  941476 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:29:44.894234  941476 command_runner.go:130] > # enable_pod_events = false
	I1213 10:29:44.894240  941476 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:29:44.894245  941476 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:29:44.894249  941476 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:29:44.894254  941476 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:29:44.894261  941476 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1213 10:29:44.894271  941476 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:29:44.894276  941476 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:29:44.894284  941476 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:29:44.894289  941476 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:29:44.894293  941476 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:29:44.894297  941476 command_runner.go:130] > # ]
	I1213 10:29:44.894303  941476 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:29:44.894309  941476 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:29:44.894316  941476 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:29:44.894321  941476 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:29:44.894324  941476 command_runner.go:130] > #
	I1213 10:29:44.894329  941476 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:29:44.894333  941476 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:29:44.894337  941476 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:29:44.894342  941476 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:29:44.894348  941476 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:29:44.894367  941476 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:29:44.894372  941476 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:29:44.894377  941476 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:29:44.894381  941476 command_runner.go:130] > # monitor_env = []
	I1213 10:29:44.894386  941476 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:29:44.894390  941476 command_runner.go:130] > # allowed_annotations = []
	I1213 10:29:44.894395  941476 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:29:44.894399  941476 command_runner.go:130] > # no_sync_log = false
	I1213 10:29:44.894403  941476 command_runner.go:130] > # default_annotations = {}
	I1213 10:29:44.894407  941476 command_runner.go:130] > # stream_websockets = false
	I1213 10:29:44.894411  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.894442  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.894448  941476 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:29:44.894454  941476 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:29:44.894461  941476 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:29:44.894468  941476 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:29:44.894471  941476 command_runner.go:130] > #   in $PATH.
	I1213 10:29:44.894478  941476 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:29:44.894482  941476 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:29:44.894488  941476 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:29:44.894492  941476 command_runner.go:130] > #   state.
	I1213 10:29:44.894498  941476 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:29:44.894504  941476 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1213 10:29:44.894510  941476 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:29:44.894516  941476 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:29:44.894521  941476 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:29:44.894527  941476 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:29:44.894533  941476 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:29:44.894539  941476 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:29:44.894545  941476 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:29:44.894550  941476 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:29:44.894557  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:29:44.894564  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:29:44.894574  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:29:44.894580  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:29:44.894588  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:29:44.894596  941476 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:29:44.894602  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:29:44.894608  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:29:44.894614  941476 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:29:44.894621  941476 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:29:44.894628  941476 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:29:44.894634  941476 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:29:44.894640  941476 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:29:44.894646  941476 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:29:44.894652  941476 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:29:44.894661  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:29:44.894667  941476 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:29:44.894672  941476 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:29:44.894679  941476 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:29:44.894684  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:29:44.894691  941476 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:29:44.894695  941476 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:29:44.894702  941476 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:29:44.894707  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:29:44.894714  941476 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1213 10:29:44.894718  941476 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:29:44.894726  941476 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:29:44.894734  941476 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:29:44.894742  941476 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:29:44.894748  941476 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:29:44.894753  941476 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:29:44.894760  941476 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:29:44.894768  941476 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:29:44.894774  941476 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:29:44.894782  941476 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:29:44.894794  941476 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:29:44.894798  941476 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:29:44.894805  941476 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:29:44.894813  941476 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:29:44.894821  941476 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:29:44.894825  941476 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:29:44.894838  941476 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:29:44.894848  941476 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:29:44.894851  941476 command_runner.go:130] > #
	I1213 10:29:44.894855  941476 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:29:44.894859  941476 command_runner.go:130] > #
	I1213 10:29:44.894866  941476 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:29:44.894872  941476 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1213 10:29:44.894878  941476 command_runner.go:130] > #
	I1213 10:29:44.894887  941476 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:29:44.894893  941476 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:29:44.894896  941476 command_runner.go:130] > #
	I1213 10:29:44.894903  941476 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:29:44.894906  941476 command_runner.go:130] > # feature.
	I1213 10:29:44.894909  941476 command_runner.go:130] > #
	I1213 10:29:44.894914  941476 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1213 10:29:44.894921  941476 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:29:44.894927  941476 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:29:44.894933  941476 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:29:44.894939  941476 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:29:44.894942  941476 command_runner.go:130] > #
	I1213 10:29:44.894948  941476 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:29:44.894954  941476 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:29:44.894957  941476 command_runner.go:130] > #
	I1213 10:29:44.894963  941476 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1213 10:29:44.894968  941476 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:29:44.894971  941476 command_runner.go:130] > #
	I1213 10:29:44.894977  941476 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:29:44.894987  941476 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:29:44.894991  941476 command_runner.go:130] > # limitation.
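	Tying the notifier description together: a hypothetical runtime entry that permits the annotation could look like the sketch below (the handler name is invented; the binary path reuses the runc path from this config):

	    [crio.runtime.runtimes.runc-notify]      # hypothetical handler name
	    runtime_path = "/usr/libexec/crio/runc"
	    allowed_annotations = [
	    	"io.kubernetes.cri-o.seccompNotifierAction",
	    ]

	A pod opting in would then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy "Never", per the constraints above.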
	I1213 10:29:44.894995  941476 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:29:44.895000  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:29:44.895004  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895008  941476 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:29:44.895013  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895016  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895020  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895025  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895028  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895032  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895036  941476 command_runner.go:130] > allowed_annotations = [
	I1213 10:29:44.895040  941476 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:29:44.895043  941476 command_runner.go:130] > ]
	I1213 10:29:44.895047  941476 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:29:44.895051  941476 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:29:44.895056  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:29:44.895059  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895064  941476 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:29:44.895069  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895072  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895076  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895081  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895084  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895089  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895093  941476 command_runner.go:130] > privileged_without_host_devices = false
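	Both entries above follow the per-handler schema documented earlier. For comparison, a sketch of what a VM-type handler would look like under the same schema (the kata paths are assumptions and are not present on this node):

	    [crio.runtime.runtimes.kata]
	    runtime_path = "/usr/local/bin/kata-runtime"          # assumed install path
	    runtime_type = "vm"
	    runtime_config_path = "/etc/kata/configuration.toml"  # assumed config path
	    privileged_without_host_devices = true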
	I1213 10:29:44.895100  941476 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:29:44.895105  941476 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:29:44.895111  941476 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:29:44.895119  941476 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1213 10:29:44.895129  941476 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:29:44.895139  941476 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:29:44.895151  941476 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:29:44.895156  941476 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:29:44.895166  941476 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:29:44.895174  941476 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:29:44.895181  941476 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:29:44.895188  941476 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:29:44.895191  941476 command_runner.go:130] > # Example:
	I1213 10:29:44.895196  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:29:44.895201  941476 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:29:44.895207  941476 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:29:44.895212  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:29:44.895216  941476 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:29:44.895219  941476 command_runner.go:130] > # cpushares = "5"
	I1213 10:29:44.895223  941476 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:29:44.895227  941476 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:29:44.895230  941476 command_runner.go:130] > # cpulimit = "35"
	I1213 10:29:44.895234  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.895238  941476 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:29:44.895245  941476 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:29:44.895250  941476 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:29:44.895259  941476 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:29:44.895267  941476 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:29:44.895274  941476 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
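	To make the cpulimit/cpuquota interaction above concrete, a sketch of a second workload that specifies cpulimit in millicores and lets CRI-O derive the quota from the period (names and values are illustrative):

	    [crio.runtime.workloads.throttled]
	    activation_annotation = "io.crio/throttled"
	    annotation_prefix = "io.crio.throttled"
	    [crio.runtime.workloads.throttled.resources]
	    cpuperiod = "100000"
	    cpulimit = "500"    # 500 millicores -> derived cpuquota = 50000 for this period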
	I1213 10:29:44.895279  941476 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:29:44.895286  941476 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:29:44.895290  941476 command_runner.go:130] > # Default value is set to true
	I1213 10:29:44.895294  941476 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:29:44.895300  941476 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:29:44.895305  941476 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:29:44.895309  941476 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:29:44.895313  941476 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:29:44.895318  941476 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1213 10:29:44.895326  941476 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:29:44.895334  941476 command_runner.go:130] > # timezone = ""
	I1213 10:29:44.895341  941476 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:29:44.895343  941476 command_runner.go:130] > #
	I1213 10:29:44.895349  941476 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:29:44.895355  941476 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:29:44.895358  941476 command_runner.go:130] > [crio.image]
	I1213 10:29:44.895364  941476 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:29:44.895368  941476 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:29:44.895373  941476 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:29:44.895380  941476 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895383  941476 command_runner.go:130] > # global_auth_file = ""
	I1213 10:29:44.895388  941476 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:29:44.895393  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895398  941476 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.895404  941476 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:29:44.895412  941476 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895417  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895420  941476 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:29:44.895426  941476 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:29:44.895432  941476 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 10:29:44.895438  941476 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 10:29:44.895444  941476 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:29:44.895448  941476 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:29:44.895454  941476 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:29:44.895460  941476 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:29:44.895467  941476 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:29:44.895473  941476 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:29:44.895479  941476 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:29:44.895485  941476 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:29:44.895488  941476 command_runner.go:130] > # pinned_images = [
	I1213 10:29:44.895491  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895497  941476 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:29:44.895503  941476 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:29:44.895512  941476 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:29:44.895519  941476 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:29:44.895524  941476 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:29:44.895529  941476 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:29:44.895534  941476 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:29:44.895540  941476 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:29:44.895547  941476 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:29:44.895554  941476 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1213 10:29:44.895559  941476 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:29:44.895564  941476 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:29:44.895570  941476 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:29:44.895576  941476 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:29:44.895580  941476 command_runner.go:130] > # changing them here.
	I1213 10:29:44.895586  941476 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:29:44.895590  941476 command_runner.go:130] > # insecure_registries = [
	I1213 10:29:44.895592  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895598  941476 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:29:44.895603  941476 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1213 10:29:44.895609  941476 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:29:44.895614  941476 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:29:44.895618  941476 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:29:44.895623  941476 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:29:44.895630  941476 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:29:44.895634  941476 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:29:44.895641  941476 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:29:44.895651  941476 command_runner.go:130] > # gets canceled. This value is also used to derive the pull progress interval, as pull_progress_timeout / 10.
	I1213 10:29:44.895657  941476 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:29:44.895662  941476 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:29:44.895666  941476 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:29:44.895672  941476 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:29:44.895679  941476 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and its resolution is ambiguous.
	I1213 10:29:44.895684  941476 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:29:44.895688  941476 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:29:44.895697  941476 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:29:44.895704  941476 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:29:44.895708  941476 command_runner.go:130] > # oci_artifact_mount_support = true
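	A sketch of an active [crio.image] table combining the options above (values mirror the commented defaults and the signature_policy this config already sets; the pinned entry is illustrative):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    pinned_images = [
	    	"registry.k8s.io/pause:3.10.1",   # keep the pause image out of kubelet GC
	    ]
	    signature_policy = "/etc/crio/policy.json"
	    short_name_mode = "enforcing"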
	I1213 10:29:44.895715  941476 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:29:44.895718  941476 command_runner.go:130] > # CNI plugins.
	I1213 10:29:44.895721  941476 command_runner.go:130] > [crio.network]
	I1213 10:29:44.895727  941476 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:29:44.895732  941476 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 10:29:44.895735  941476 command_runner.go:130] > # cni_default_network = ""
	I1213 10:29:44.895741  941476 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:29:44.895745  941476 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:29:44.895751  941476 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:29:44.895754  941476 command_runner.go:130] > # plugin_dirs = [
	I1213 10:29:44.895758  941476 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:29:44.895760  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895764  941476 command_runner.go:130] > # List of included pod metrics.
	I1213 10:29:44.895768  941476 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:29:44.895771  941476 command_runner.go:130] > # ]
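	Written out as an active table, the [crio.network] defaults above amount to the following sketch (paths match the commented defaults):

	    [crio.network]
	    network_dir = "/etc/cni/net.d/"
	    plugin_dirs = [
	    	"/opt/cni/bin/",
	    ]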
	I1213 10:29:44.895778  941476 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1213 10:29:44.895781  941476 command_runner.go:130] > [crio.metrics]
	I1213 10:29:44.895786  941476 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:29:44.895790  941476 command_runner.go:130] > # enable_metrics = false
	I1213 10:29:44.895794  941476 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:29:44.895799  941476 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:29:44.895805  941476 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:29:44.895813  941476 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:29:44.895818  941476 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:29:44.895822  941476 command_runner.go:130] > # metrics_collectors = [
	I1213 10:29:44.895826  941476 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:29:44.895831  941476 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:29:44.895834  941476 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:29:44.895838  941476 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:29:44.895842  941476 command_runner.go:130] > # 	"operations_total",
	I1213 10:29:44.895849  941476 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:29:44.895854  941476 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:29:44.895859  941476 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:29:44.895863  941476 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:29:44.895867  941476 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:29:44.895871  941476 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:29:44.895875  941476 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:29:44.895879  941476 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:29:44.895883  941476 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:29:44.895888  941476 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:29:44.895892  941476 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:29:44.895896  941476 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:29:44.895899  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895905  941476 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:29:44.895908  941476 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:29:44.895913  941476 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:29:44.895917  941476 command_runner.go:130] > # metrics_port = 9090
	I1213 10:29:44.895922  941476 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:29:44.895925  941476 command_runner.go:130] > # metrics_socket = ""
	I1213 10:29:44.895930  941476 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:29:44.895937  941476 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:29:44.895943  941476 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:29:44.895947  941476 command_runner.go:130] > # certificate on any modification event.
	I1213 10:29:44.895951  941476 command_runner.go:130] > # metrics_cert = ""
	I1213 10:29:44.895955  941476 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:29:44.895960  941476 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:29:44.895963  941476 command_runner.go:130] > # metrics_key = ""
	I1213 10:29:44.895969  941476 command_runner.go:130] > # Configuration required for exporting OpenTelemetry trace data
	I1213 10:29:44.895972  941476 command_runner.go:130] > [crio.tracing]
	I1213 10:29:44.895978  941476 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:29:44.895981  941476 command_runner.go:130] > # enable_tracing = false
	I1213 10:29:44.895987  941476 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1213 10:29:44.895991  941476 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:29:44.896000  941476 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:29:44.896007  941476 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:29:44.896011  941476 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:29:44.896014  941476 command_runner.go:130] > [crio.nri]
	I1213 10:29:44.896018  941476 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:29:44.896022  941476 command_runner.go:130] > # enable_nri = true
	I1213 10:29:44.896025  941476 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:29:44.896030  941476 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:29:44.896034  941476 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:29:44.896038  941476 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:29:44.896043  941476 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:29:44.896051  941476 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:29:44.896057  941476 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:29:44.896113  941476 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:29:44.896119  941476 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:29:44.896123  941476 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:29:44.896128  941476 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:29:44.896133  941476 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:29:44.896137  941476 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:29:44.896144  941476 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:29:44.896150  941476 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:29:44.896155  941476 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:29:44.896158  941476 command_runner.go:130] > # - OCI hook injection
	I1213 10:29:44.896163  941476 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:29:44.896167  941476 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:29:44.896172  941476 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:29:44.896176  941476 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:29:44.896186  941476 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:29:44.896193  941476 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:29:44.896198  941476 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:29:44.896201  941476 command_runner.go:130] > #
	I1213 10:29:44.896205  941476 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:29:44.896209  941476 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:29:44.896218  941476 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:29:44.896223  941476 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:29:44.896229  941476 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:29:44.896234  941476 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:29:44.896239  941476 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:29:44.896243  941476 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:29:44.896245  941476 command_runner.go:130] > # ]
	I1213 10:29:44.896251  941476 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 10:29:44.896257  941476 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:29:44.896261  941476 command_runner.go:130] > [crio.stats]
	I1213 10:29:44.896267  941476 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:29:44.896272  941476 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:29:44.896276  941476 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:29:44.896281  941476 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:29:44.896287  941476 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:29:44.896291  941476 command_runner.go:130] > # collection_period = 0
	I1213 10:29:44.896753  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865564739Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:29:44.896774  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865608538Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:29:44.896784  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865641285Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:29:44.896793  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.86566636Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:29:44.896803  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865746328Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.896812  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.866102466Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:29:44.896826  941476 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
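
The block above is CRI-O dumping its fully-commented default configuration, and the "Updating config" messages that follow show the load order: the single file /etc/crio/crio.conf first, then the drop-ins under /etc/crio/crio.conf.d in lexical order, so 10-crio.conf can override 02-crio.conf. As a minimal sketch, a drop-in that turned on the metrics and tracing endpoints documented above could look like this (the file name 99-observability.conf is hypothetical, picked so it sorts after minikube's own drop-ins):

	# /etc/crio/crio.conf.d/99-observability.conf (hypothetical)
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
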
	I1213 10:29:44.896949  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:44.896967  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:44.896990  941476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:29:44.897016  941476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:29:44.897147  941476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
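
The YAML above is rendered from the kubeadm options struct logged before it; each per-component extraArgs entry corresponds to minikube's --extra-config=<component>.<key>=<value> flag. A hedged illustration of that mapping (the values shown are ones already present above, spelled out explicitly only for demonstration; this run did not pass them on the command line):

	out/minikube-linux-arm64 start -p functional-200955 --driver=docker --container-runtime=crio \
	  --extra-config=controller-manager.leader-elect=false \
	  --extra-config=scheduler.leader-elect=false
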
	I1213 10:29:44.897221  941476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:29:44.904800  941476 command_runner.go:130] > kubeadm
	I1213 10:29:44.904821  941476 command_runner.go:130] > kubectl
	I1213 10:29:44.904825  941476 command_runner.go:130] > kubelet
	I1213 10:29:44.905083  941476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:29:44.905149  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:29:44.912855  941476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:29:44.926542  941476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:29:44.940018  941476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:29:44.953058  941476 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:29:44.956927  941476 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:29:44.957067  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.090811  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:45.111343  941476 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:29:45.111425  941476 certs.go:195] generating shared ca certs ...
	I1213 10:29:45.111459  941476 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.111653  941476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:29:45.111736  941476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:29:45.111762  941476 certs.go:257] generating profile certs ...
	I1213 10:29:45.111936  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:29:45.112043  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:29:45.112141  941476 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:29:45.112183  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:29:45.112222  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:29:45.112262  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:29:45.112293  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:29:45.112328  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:29:45.112371  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:29:45.112404  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:29:45.112444  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:29:45.112521  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:29:45.112600  941476 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:29:45.112629  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:29:45.112687  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:29:45.112733  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:29:45.112831  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:29:45.113060  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:45.113147  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem -> /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.113186  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.113227  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.113935  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:29:45.163864  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:29:45.189286  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:29:45.237278  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:29:45.263467  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:29:45.289513  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:29:45.309018  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:29:45.329141  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:29:45.347665  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:29:45.365433  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:29:45.383209  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:29:45.402144  941476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:29:45.415520  941476 ssh_runner.go:195] Run: openssl version
	I1213 10:29:45.421431  941476 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:29:45.421939  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.429504  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:29:45.436991  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440561  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440796  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440864  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.483791  941476 command_runner.go:130] > 51391683
	I1213 10:29:45.484209  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:29:45.491520  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.498932  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:29:45.509018  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513215  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513301  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513386  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.554662  941476 command_runner.go:130] > 3ec20f2e
	I1213 10:29:45.555104  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:29:45.562598  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.570035  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:29:45.578308  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582322  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582399  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582459  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.623357  941476 command_runner.go:130] > b5213941
	I1213 10:29:45.623846  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:29:45.631423  941476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635203  941476 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635226  941476 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:29:45.635232  941476 command_runner.go:130] > Device: 259,1	Inode: 1052598     Links: 1
	I1213 10:29:45.635239  941476 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:45.635245  941476 command_runner.go:130] > Access: 2025-12-13 10:25:37.832562674 +0000
	I1213 10:29:45.635250  941476 command_runner.go:130] > Modify: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635255  941476 command_runner.go:130] > Change: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635260  941476 command_runner.go:130] >  Birth: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635337  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:29:45.676331  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.676780  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:29:45.719984  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.720440  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:29:45.763044  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.763152  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:29:45.804752  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.805187  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:29:45.846806  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.847203  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:29:45.898203  941476 command_runner.go:130] > Certificate will not expire
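
The six openssl runs above all use -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?", and print "Certificate will not expire" on success. A minimal Go sketch of the same check, assuming the PEM path arrives as the first argument (illustrative only; minikube shells out to openssl here, as the logs show):

	// checkend.go: exit non-zero if the certificate expires within 24h,
	// mirroring `openssl x509 -noout -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		block, _ := pem.Decode(data) // first PEM block holds the certificate
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM data found")
			os.Exit(2)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}
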
	I1213 10:29:45.898680  941476 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:45.898809  941476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:29:45.898933  941476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:29:45.924889  941476 cri.go:89] found id: ""
	I1213 10:29:45.924989  941476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:29:45.932161  941476 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:29:45.932226  941476 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:29:45.932248  941476 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:29:45.933123  941476 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:29:45.933177  941476 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:29:45.933244  941476 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:29:45.940638  941476 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:29:45.941072  941476 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-200955" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.941185  941476 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-904040/kubeconfig needs updating (will repair): [kubeconfig missing "functional-200955" cluster setting kubeconfig missing "functional-200955" context setting]
	I1213 10:29:45.941452  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.941955  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.942103  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.942644  941476 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:29:45.942668  941476 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:29:45.942678  941476 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:29:45.942683  941476 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:29:45.942687  941476 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:29:45.942727  941476 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:29:45.943068  941476 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:29:45.951089  941476 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:29:45.951121  941476 kubeadm.go:602] duration metric: took 17.93243ms to restartPrimaryControlPlane
	I1213 10:29:45.951143  941476 kubeadm.go:403] duration metric: took 52.461003ms to StartCluster
	I1213 10:29:45.951159  941476 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951223  941476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.951796  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951989  941476 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:29:45.952368  941476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:29:45.952448  941476 addons.go:70] Setting storage-provisioner=true in profile "functional-200955"
	I1213 10:29:45.952463  941476 addons.go:239] Setting addon storage-provisioner=true in "functional-200955"
	I1213 10:29:45.952488  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.952566  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:45.952610  941476 addons.go:70] Setting default-storageclass=true in profile "functional-200955"
	I1213 10:29:45.952623  941476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-200955"
	I1213 10:29:45.952911  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.952951  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.958523  941476 out.go:179] * Verifying Kubernetes components...
	I1213 10:29:45.963377  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.989193  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.989357  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.989643  941476 addons.go:239] Setting addon default-storageclass=true in "functional-200955"
	I1213 10:29:45.989674  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.990084  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.996374  941476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:29:45.999301  941476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:45.999325  941476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:29:45.999389  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.025120  941476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.025146  941476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:29:46.025210  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.047237  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.065614  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.182514  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:46.188367  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:46.228034  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.975760  941476 node_ready.go:35] waiting up to 6m0s for node "functional-200955" to be "Ready" ...
	I1213 10:29:46.975884  941476 type.go:168] "Request Body" body=""
	I1213 10:29:46.975940  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:46.976159  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976214  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976242  941476 retry.go:31] will retry after 310.714541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976276  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976296  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976306  941476 retry.go:31] will retry after 212.322267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
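
From here on, the repeated GET https://192.168.49.2:8441/api/v1/nodes/functional-200955 requests are the node_ready wait loop polling while the apiserver restarts (the empty status="" responses correspond to connection refusals). A minimal client-go sketch of that readiness check, assuming a kubeconfig path (illustrative only; not minikube's actual node_ready.go):

	// nodeready.go: poll until the node reports Ready, tolerating
	// "connection refused" while the apiserver comes back up.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// kubeconfig path is an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-200955", metav1.GetOptions{})
				if err != nil {
					return false, nil // apiserver not reachable yet: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready:", err == nil)
	}
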
	I1213 10:29:47.188794  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.245508  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.249207  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.249253  941476 retry.go:31] will retry after 232.449188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.287510  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.352377  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.355988  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.356022  941476 retry.go:31] will retry after 216.845813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.476461  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.476540  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.476866  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.482125  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.540633  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.540674  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.540713  941476 retry.go:31] will retry after 621.150122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.573847  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.632148  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.632198  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.632239  941476 retry.go:31] will retry after 652.105841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.976625  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.976714  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.977047  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.162374  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.224014  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.224050  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.224096  941476 retry.go:31] will retry after 486.360631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.285241  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:48.341512  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.345196  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.345232  941476 retry.go:31] will retry after 851.054667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.476501  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.476654  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.477264  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.710766  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.774597  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.774656  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.774677  941476 retry.go:31] will retry after 1.42902923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:48.976568  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:49.197102  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:49.269601  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:49.269709  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.269757  941476 retry.go:31] will retry after 1.296706305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
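
Each "will retry after ..." line above comes from minikube's retry helper: the kubectl apply keeps failing with "connection refused" while nothing is listening on :8441, so the apply is re-run with growing, jittered delays. A generic Go sketch of that pattern (not minikube's retry package itself; all names here are illustrative):

	// backoff.go: retry fn with jittered, roughly doubling delays,
	// in the spirit of the retry.go lines in this log.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryAfterBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2)) // up to +50% jitter
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		_ = retryAfterBackoff(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 4 {
				return fmt.Errorf("connect: connection refused") // stand-in for the apply failures above
			}
			return nil
		})
		fmt.Println("succeeded after", calls, "attempts")
	}
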
	I1213 10:29:49.476109  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.476573  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:49.976081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.976179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.204048  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:50.263787  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.263835  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.263857  941476 retry.go:31] will retry after 2.257067811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.476081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.476171  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.566907  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:50.629271  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.629314  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.629333  941476 retry.go:31] will retry after 1.765407868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.976841  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.976923  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.977217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:50.977269  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:51.475933  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.476012  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.476290  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:51.976028  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.395020  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:52.456823  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.456875  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.456899  941476 retry.go:31] will retry after 1.561909689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.476063  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.476147  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.476449  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.521915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:52.578203  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.581870  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.581904  941476 retry.go:31] will retry after 3.834800834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.976296  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.976371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.976640  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:53.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:53.476481  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:53.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.976238  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.976665  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.019913  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:54.081795  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:54.081851  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.081875  941476 retry.go:31] will retry after 4.858817388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.476105  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.476182  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.476432  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.976093  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.976415  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:55.476129  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.476226  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:55.476588  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:55.976456  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.976520  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:56.417572  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:56.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.476423  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:56.480436  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.480483  941476 retry.go:31] will retry after 4.792687173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.976051  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.976145  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.476051  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.976104  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.976249  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.976601  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:57.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:58.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.476277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:58.940954  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:58.976372  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.976458  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.976716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.010699  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:59.010740  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.010759  941476 retry.go:31] will retry after 7.734765537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.476520  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.476594  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.476930  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.976794  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.977198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:59.977252  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:00.476972  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.477066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.477383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:00.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.976196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.976547  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.274155  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:01.347774  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:01.347813  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.347834  941476 retry.go:31] will retry after 9.325183697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.478515  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.478628  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.479014  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.976839  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.976947  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.977331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:01.977404  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:02.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.476537  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.976649  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.476192  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.476276  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.976228  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.976352  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.976726  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:04.476318  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.476410  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.476740  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:04.476799  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:04.976561  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.976631  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.976878  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.476699  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.476787  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.477120  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.977016  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.977144  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.977510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.476060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.476330  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.746112  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:06.805144  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:06.808651  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.808685  941476 retry.go:31] will retry after 7.088599712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.976026  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.976116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:06.976507  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:07.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.476279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.476634  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:07.976084  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.976170  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.976444  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.476153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.476482  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.976213  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.976308  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.976642  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:08.976701  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:09.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.476212  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:09.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.976492  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.476265  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.476368  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.476715  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.673230  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:10.732312  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:10.736051  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.736087  941476 retry.go:31] will retry after 8.123592788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.976475  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.976550  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.976847  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:10.976888  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:11.476725  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.476822  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.477169  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:11.976044  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.976120  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.976458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.976059  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.976141  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:13.476058  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.476137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.476490  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:13.476548  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:13.898101  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:13.964340  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:13.967836  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.967879  941476 retry.go:31] will retry after 8.492520723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.976068  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.476442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.976067  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.976142  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.476080  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.975941  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.976026  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.976392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:15.976452  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:16.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.476159  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:16.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.976102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.476174  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.976100  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.976180  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.976600  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:17.976654  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:18.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:18.476393  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:18.859953  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:18.916800  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:18.920763  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.920813  941476 retry.go:31] will retry after 11.17407044s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.976006  941476 type.go:168] "Request Body" body=""
	I1213 10:30:18.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:18.976434  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:19.476057  941476 type.go:168] "Request Body" body=""
	I1213 10:30:19.476156  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:19.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:19.975977  941476 type.go:168] "Request Body" body=""
	I1213 10:30:19.976055  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:19.976310  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:20.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:30:20.476128  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:20.476491  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:20.476556  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:20.976222  941476 type.go:168] "Request Body" body=""
	I1213 10:30:20.976298  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:20.976627  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:21.476132  941476 type.go:168] "Request Body" body=""
	I1213 10:30:21.476230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:21.476520  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:21.976457  941476 type.go:168] "Request Body" body=""
	I1213 10:30:21.976534  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:21.976932  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:22.460571  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:22.476131  941476 type.go:168] "Request Body" body=""
	I1213 10:30:22.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:22.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:22.521379  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:22.525059  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:22.525092  941476 retry.go:31] will retry after 25.139993985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:22.976652  941476 type.go:168] "Request Body" body=""
	I1213 10:30:22.976730  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:22.976986  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:22.977026  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
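
Each ~500ms request/response pair above is one iteration of minikube's node-readiness poll: a GET on the node object, checking its Ready condition, where "connection refused" means kube-apiserver on 192.168.49.2:8441 is not accepting connections yet. A minimal client-go sketch of such a poll follows; the kubeconfig path and node name are taken from the log, but this is an illustration, not minikube's actual node_ready.go.

	// node_ready_check.go — a hedged sketch of a Ready-condition poll via client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-200955", metav1.GetOptions{})
			if err != nil {
				// With the apiserver down this is the "connection refused" seen above.
				fmt.Println("will retry:", err)
				time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}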
	I1213 10:30:23.476843  941476 type.go:168] "Request Body" body=""
	I1213 10:30:23.476919  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:23.477283  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:23.975970  941476 type.go:168] "Request Body" body=""
	I1213 10:30:23.976053  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:23.976449  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:24.476161  941476 type.go:168] "Request Body" body=""
	I1213 10:30:24.476245  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:24.476513  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:24.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:30:24.976147  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:24.976481  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:25.476274  941476 type.go:168] "Request Body" body=""
	I1213 10:30:25.476347  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:25.476670  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:25.476736  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:25.976627  941476 type.go:168] "Request Body" body=""
	I1213 10:30:25.976707  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:25.976951  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:26.476709  941476 type.go:168] "Request Body" body=""
	I1213 10:30:26.476781  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:26.477095  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:26.976010  941476 type.go:168] "Request Body" body=""
	I1213 10:30:26.976085  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:26.976390  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:27.475998  941476 type.go:168] "Request Body" body=""
	I1213 10:30:27.476197  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:27.476524  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:27.976149  941476 type.go:168] "Request Body" body=""
	I1213 10:30:27.976232  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:27.976587  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:27.976691  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:28.476061  941476 type.go:168] "Request Body" body=""
	I1213 10:30:28.476140  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:28.476466  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:28.975994  941476 type.go:168] "Request Body" body=""
	I1213 10:30:28.976062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:28.976382  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:29.475998  941476 type.go:168] "Request Body" body=""
	I1213 10:30:29.476091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:29.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:29.976195  941476 type.go:168] "Request Body" body=""
	I1213 10:30:29.976285  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:29.976645  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:30.096045  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:30.160844  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:30.160891  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:30.160917  941476 retry.go:31] will retry after 23.835716192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
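
The apply itself is executed on the node via ssh_runner.go as a plain kubectl invocation with an explicit KUBECONFIG. Run locally, the same command amounts to roughly the following; the paths are the ones from the log, but the exec wrapper is a hedged sketch, not minikube's code. Validation fails first because kubectl cannot download the OpenAPI schema from the apiserver at localhost:8441.

	// apply_addon.go — a sketch of what the ssh_runner.go command amounts to.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
		)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			// With kube-apiserver down, kubectl cannot fetch /openapi/v2 and
			// validation fails before anything is submitted.
			fmt.Println("apply failed:", err)
		}
	}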
	I1213 10:30:30.476291  941476 type.go:168] "Request Body" body=""
	I1213 10:30:30.476381  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:30.476623  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:30.476662  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:30.976005  941476 type.go:168] "Request Body" body=""
	I1213 10:30:30.976079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:30.976448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:31.475993  941476 type.go:168] "Request Body" body=""
	I1213 10:30:31.476105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:31.476447  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:31.976396  941476 type.go:168] "Request Body" body=""
	I1213 10:30:31.976460  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:31.976719  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:32.476555  941476 type.go:168] "Request Body" body=""
	I1213 10:30:32.476640  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:32.476947  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:32.476999  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:32.976734  941476 type.go:168] "Request Body" body=""
	I1213 10:30:32.976812  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:32.977150  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:33.476870  941476 type.go:168] "Request Body" body=""
	I1213 10:30:33.476937  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:33.477226  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:33.975973  941476 type.go:168] "Request Body" body=""
	I1213 10:30:33.976043  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:33.976419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:34.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:30:34.476101  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:34.476510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:34.975996  941476 type.go:168] "Request Body" body=""
	I1213 10:30:34.976068  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:34.976382  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:34.976435  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:35.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:30:35.476153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:35.476480  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:35.975911  941476 type.go:168] "Request Body" body=""
	I1213 10:30:35.975996  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:35.976291  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:36.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:30:36.476057  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:36.476387  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:36.976488  941476 type.go:168] "Request Body" body=""
	I1213 10:30:36.976570  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:36.976951  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:36.977012  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:37.476783  941476 type.go:168] "Request Body" body=""
	I1213 10:30:37.476896  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:37.477216  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:37.975936  941476 type.go:168] "Request Body" body=""
	I1213 10:30:37.976015  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:37.976268  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:38.476001  941476 type.go:168] "Request Body" body=""
	I1213 10:30:38.476091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:38.476424  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:38.976022  941476 type.go:168] "Request Body" body=""
	I1213 10:30:38.976096  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:38.976428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:39.476003  941476 type.go:168] "Request Body" body=""
	I1213 10:30:39.476084  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:39.476362  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:39.476402  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:39.975986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:39.976076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:39.976383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:40.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:30:40.476132  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:40.476454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:40.976176  941476 type.go:168] "Request Body" body=""
	I1213 10:30:40.976252  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:40.976500  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:41.476026  941476 type.go:168] "Request Body" body=""
	I1213 10:30:41.476124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:41.476456  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:41.476514  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:41.976469  941476 type.go:168] "Request Body" body=""
	I1213 10:30:41.976585  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:41.976895  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:42.476663  941476 type.go:168] "Request Body" body=""
	I1213 10:30:42.476728  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:42.477006  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:42.976839  941476 type.go:168] "Request Body" body=""
	I1213 10:30:42.976919  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:42.980297  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1213 10:30:43.476086  941476 type.go:168] "Request Body" body=""
	I1213 10:30:43.476186  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:43.476547  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:43.476623  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:43.976205  941476 type.go:168] "Request Body" body=""
	I1213 10:30:43.976276  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:43.976547  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:44.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:30:44.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:44.476466  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:44.976036  941476 type.go:168] "Request Body" body=""
	I1213 10:30:44.976111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:44.976440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:45.475987  941476 type.go:168] "Request Body" body=""
	I1213 10:30:45.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:45.476331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:45.975925  941476 type.go:168] "Request Body" body=""
	I1213 10:30:45.976003  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:45.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:45.976382  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:46.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:30:46.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:46.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:46.976213  941476 type.go:168] "Request Body" body=""
	I1213 10:30:46.976285  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:46.976538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:47.475994  941476 type.go:168] "Request Body" body=""
	I1213 10:30:47.476069  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:47.476399  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:47.665860  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:47.731394  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:47.731441  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:47.731460  941476 retry.go:31] will retry after 19.194003802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:47.975899  941476 type.go:168] "Request Body" body=""
	I1213 10:30:47.975974  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:47.976303  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:48.475998  941476 type.go:168] "Request Body" body=""
	I1213 10:30:48.476084  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:48.476410  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:48.476469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:48.976020  941476 type.go:168] "Request Body" body=""
	I1213 10:30:48.976114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:48.976440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:49.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:30:49.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:49.476434  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:49.976100  941476 type.go:168] "Request Body" body=""
	I1213 10:30:49.976167  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:49.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:50.476044  941476 type.go:168] "Request Body" body=""
	I1213 10:30:50.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:50.476434  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:50.976044  941476 type.go:168] "Request Body" body=""
	I1213 10:30:50.976126  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:50.976458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:50.976519  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:51.476190  941476 type.go:168] "Request Body" body=""
	I1213 10:30:51.476257  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:51.476607  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:51.976524  941476 type.go:168] "Request Body" body=""
	I1213 10:30:51.976618  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:51.976938  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:52.476676  941476 type.go:168] "Request Body" body=""
	I1213 10:30:52.476768  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:52.477095  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:52.976699  941476 type.go:168] "Request Body" body=""
	I1213 10:30:52.976774  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:52.977061  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:52.977104  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:53.476895  941476 type.go:168] "Request Body" body=""
	I1213 10:30:53.476971  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:53.477260  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:53.975931  941476 type.go:168] "Request Body" body=""
	I1213 10:30:53.976008  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:53.976338  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:53.997712  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:54.059604  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:54.063660  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:54.063694  941476 retry.go:31] will retry after 30.126310408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
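
The error text suggests --validate=false, but that would not help here: validation is merely the first step to touch the apiserver, and with kube-apiserver down the apply would fail at submission anyway. Both endpoints seen in the log refuse connections, which a plain TCP probe makes explicit (an illustration, not part of minikube):

	// probe_apiserver.go — a minimal reachability probe for the two endpoints in the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Println(addr, "unreachable:", err) // "connect: connection refused" while apiserver is down
				continue
			}
			conn.Close()
			fmt.Println(addr, "accepting connections")
		}
	}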
	I1213 10:30:54.475958  941476 type.go:168] "Request Body" body=""
	I1213 10:30:54.476070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:54.476392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:54.976060  941476 type.go:168] "Request Body" body=""
	I1213 10:30:54.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:54.976488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:55.476185  941476 type.go:168] "Request Body" body=""
	I1213 10:30:55.476260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:55.476583  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:55.476642  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:55.976527  941476 type.go:168] "Request Body" body=""
	I1213 10:30:55.976599  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:55.976860  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:56.476675  941476 type.go:168] "Request Body" body=""
	I1213 10:30:56.476769  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:56.477141  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:56.976045  941476 type.go:168] "Request Body" body=""
	I1213 10:30:56.976119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:56.976449  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:57.476156  941476 type.go:168] "Request Body" body=""
	I1213 10:30:57.476236  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:57.476486  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:57.976022  941476 type.go:168] "Request Body" body=""
	I1213 10:30:57.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:57.976440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:57.976502  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:58.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:30:58.476124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:58.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:58.976150  941476 type.go:168] "Request Body" body=""
	I1213 10:30:58.976235  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:58.976490  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:59.476182  941476 type.go:168] "Request Body" body=""
	I1213 10:30:59.476288  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:59.476621  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:59.976365  941476 type.go:168] "Request Body" body=""
	I1213 10:30:59.976444  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:59.976775  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:59.976845  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:00.476639  941476 type.go:168] "Request Body" body=""
	I1213 10:31:00.476719  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:00.477025  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:00.976827  941476 type.go:168] "Request Body" body=""
	I1213 10:31:00.976918  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:00.977328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:01.475937  941476 type.go:168] "Request Body" body=""
	I1213 10:31:01.476035  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:01.476377  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:01.976053  941476 type.go:168] "Request Body" body=""
	I1213 10:31:01.976138  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:01.976399  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:02.476026  941476 type.go:168] "Request Body" body=""
	I1213 10:31:02.476102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:02.476453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:02.476508  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:02.976184  941476 type.go:168] "Request Body" body=""
	I1213 10:31:02.976261  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:02.976604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:03.476321  941476 type.go:168] "Request Body" body=""
	I1213 10:31:03.476405  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:03.476656  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:03.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:31:03.976062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:03.976373  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:04.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:31:04.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:04.476440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:04.976145  941476 type.go:168] "Request Body" body=""
	I1213 10:31:04.976215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:04.976528  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:04.976587  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:05.476046  941476 type.go:168] "Request Body" body=""
	I1213 10:31:05.476128  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:05.476503  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:05.976329  941476 type.go:168] "Request Body" body=""
	I1213 10:31:05.976404  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:05.976818  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:06.476644  941476 type.go:168] "Request Body" body=""
	I1213 10:31:06.476727  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:06.476990  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:06.925824  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:06.976406  941476 type.go:168] "Request Body" body=""
	I1213 10:31:06.976485  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:06.976757  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:06.976800  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:06.991385  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991438  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991540  941476 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
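
At this point the retry budget for storage-provisioner is exhausted and out.go surfaces the failure as the user-facing "Enabling 'storage-provisioner' returned an error" warning. Before treating such an apply as a hard failure, one could confirm the apiserver's state via its /readyz endpoint; a hedged sketch follows (the insecure TLS config is for illustration only, and an unauthenticated request may get 401/403 rather than 200, which still proves the socket is reachable):

	// readyz_probe.go — a sketch of checking apiserver readiness over HTTPS.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   3 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err != nil {
			fmt.Println("apiserver not ready:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("readyz status:", resp.Status)
	}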
	I1213 10:31:07.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:31:07.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:07.476475  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:07.976033  941476 type.go:168] "Request Body" body=""
	I1213 10:31:07.976116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:07.976413  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-200955 repeated every ~500ms from 10:31:08.476 through 10:31:23.976, each returning an empty response; node_ready "will retry" warnings ("dial tcp 192.168.49.2:8441: connect: connection refused") logged roughly every 2s ...]
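The loop condensed above is minikube waiting for the restarted apiserver before it can read the node's Ready condition: a fixed ~500ms poll that logs each connection refusal and retries instead of aborting. A minimal sketch of that pattern, assuming a plain net/http client; pollNodeReady, the timeout, and the hard-coded URL are hypothetical stand-ins for illustration, not minikube's actual node_ready.go code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// pollNodeReady polls url every 500ms until the apiserver answers with
// 200 OK or ctx expires. Connection errors (like the "connection refused"
// lines in the log) are logged and retried rather than treated as fatal.
func pollNodeReady(ctx context.Context, url string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("node never became reachable: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				fmt.Printf("error getting node (will retry): %v\n", err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Hypothetical endpoint mirroring the URL polled in the log.
	_ = pollNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-200955")
}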
	I1213 10:31:24.190915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:24.248888  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.248934  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.249045  941476 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:31:24.254122  941476 out.go:179] * Enabled addons: 
	I1213 10:31:24.256914  941476 addons.go:530] duration metric: took 1m38.304545325s for enable addons: enabled=[]
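The storageclass failure above follows from the same outage: kubectl's client-side validation needs the apiserver's /openapi/v2 document, so while port 8441 refuses connections the apply exits non-zero and minikube logs "apply failed, will retry" before giving up on the addon. A rough sketch of such a retry wrapper, assuming os/exec; applyManifest, the attempt count, and the fixed backoff are invented for illustration, not minikube's actual addons.go logic:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out the same command line seen in the log and
// retries on failure, since validation keeps failing until the apiserver
// serves /openapi/v2 again.
func applyManifest(kubeconfig, kubectl, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"apply", "--force", "-f", manifest)
		if out, err := cmd.CombinedOutput(); err != nil {
			lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
			time.Sleep(2 * time.Second) // hypothetical backoff
			continue
		}
		return nil
	}
	return lastErr
}

func main() {
	if err := applyManifest("/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
		fmt.Println(err)
	}
}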
	I1213 10:31:24.476214  941476 type.go:168] "Request Body" body=""
	I1213 10:31:24.476305  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.476571  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.976075  941476 type.go:168] "Request Body" body=""
	I1213 10:31:24.976150  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.976469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.475994  941476 type.go:168] "Request Body" body=""
	I1213 10:31:25.476100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.476424  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:25.476482  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:25.976304  941476 type.go:168] "Request Body" body=""
	I1213 10:31:25.976372  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.976622  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:26.476058  941476 type.go:168] "Request Body" body=""
	I1213 10:31:26.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:26.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:31:26.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.475988  941476 type.go:168] "Request Body" body=""
	I1213 10:31:27.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.476317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.976108  941476 type.go:168] "Request Body" body=""
	I1213 10:31:27.976196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.976535  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:27.976591  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:28.476254  941476 type.go:168] "Request Body" body=""
	I1213 10:31:28.476381  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.476716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:28.975973  941476 type.go:168] "Request Body" body=""
	I1213 10:31:28.976047  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.976353  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.476048  941476 type.go:168] "Request Body" body=""
	I1213 10:31:29.476126  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.476474  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:31:29.976247  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.976617  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:29.976678  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:30.476323  941476 type.go:168] "Request Body" body=""
	I1213 10:31:30.476391  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.476664  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:30.976054  941476 type.go:168] "Request Body" body=""
	I1213 10:31:30.976128  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.976456  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.476168  941476 type.go:168] "Request Body" body=""
	I1213 10:31:31.476269  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.476567  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.976505  941476 type.go:168] "Request Body" body=""
	I1213 10:31:31.976574  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.976850  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:31.976891  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:32.476715  941476 type.go:168] "Request Body" body=""
	I1213 10:31:32.476794  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.477154  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:32.976964  941476 type.go:168] "Request Body" body=""
	I1213 10:31:32.977041  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.977388  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.476013  941476 type.go:168] "Request Body" body=""
	I1213 10:31:33.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.476329  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:31:33.976119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.976457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.476014  941476 type.go:168] "Request Body" body=""
	I1213 10:31:34.476100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.476438  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:34.476494  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:34.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:34.976087  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.976342  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:35.476018  941476 type.go:168] "Request Body" body=""
	I1213 10:31:35.476143  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.476462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:35.976397  941476 type.go:168] "Request Body" body=""
	I1213 10:31:35.976481  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.976852  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.476416  941476 type.go:168] "Request Body" body=""
	I1213 10:31:36.476490  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.476745  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:36.476785  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:36.976682  941476 type.go:168] "Request Body" body=""
	I1213 10:31:36.976776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.977178  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:37.476965  941476 type.go:168] "Request Body" body=""
	I1213 10:31:37.477045  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.477383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:37.976029  941476 type.go:168] "Request Body" body=""
	I1213 10:31:37.976095  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.976361  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:31:38.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.975981  941476 type.go:168] "Request Body" body=""
	I1213 10:31:38.976069  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.976409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:38.976469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:39.476151  941476 type.go:168] "Request Body" body=""
	I1213 10:31:39.476225  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.476508  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:39.976053  941476 type.go:168] "Request Body" body=""
	I1213 10:31:39.976130  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.976448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:31:40.476119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.976091  941476 type.go:168] "Request Body" body=""
	I1213 10:31:40.976170  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.476048  941476 type.go:168] "Request Body" body=""
	I1213 10:31:41.476125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.476626  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:41.476675  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:41.976630  941476 type.go:168] "Request Body" body=""
	I1213 10:31:41.976743  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.977553  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:42.475972  941476 type.go:168] "Request Body" body=""
	I1213 10:31:42.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.476364  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:42.976014  941476 type.go:168] "Request Body" body=""
	I1213 10:31:42.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.976440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:31:43.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.975985  941476 type.go:168] "Request Body" body=""
	I1213 10:31:43.976054  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.976344  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:43.976397  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:44.476016  941476 type.go:168] "Request Body" body=""
	I1213 10:31:44.476093  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.476411  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:44.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:31:44.976151  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.976503  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.476045  941476 type.go:168] "Request Body" body=""
	I1213 10:31:45.476120  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.476386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.976018  941476 type.go:168] "Request Body" body=""
	I1213 10:31:45.976092  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.976393  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:45.976440  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:46.476013  941476 type.go:168] "Request Body" body=""
	I1213 10:31:46.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:46.975976  941476 type.go:168] "Request Body" body=""
	I1213 10:31:46.976048  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:31:47.476109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.476419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.976020  941476 type.go:168] "Request Body" body=""
	I1213 10:31:47.976095  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.976422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:47.976480  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:48.476004  941476 type.go:168] "Request Body" body=""
	I1213 10:31:48.476083  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.476391  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:48.976026  941476 type.go:168] "Request Body" body=""
	I1213 10:31:48.976109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.976439  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.476029  941476 type.go:168] "Request Body" body=""
	I1213 10:31:49.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.976130  941476 type.go:168] "Request Body" body=""
	I1213 10:31:49.976202  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.976477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:49.976519  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:50.476169  941476 type.go:168] "Request Body" body=""
	I1213 10:31:50.476246  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.476586  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:50.976287  941476 type.go:168] "Request Body" body=""
	I1213 10:31:50.976360  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.976729  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.476495  941476 type.go:168] "Request Body" body=""
	I1213 10:31:51.476574  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.476839  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.976777  941476 type.go:168] "Request Body" body=""
	I1213 10:31:51.976892  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.977255  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:51.977312  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:52.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:31:52.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.476505  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.975986  941476 type.go:168] "Request Body" body=""
	I1213 10:31:52.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.976377  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:53.476003  941476 type.go:168] "Request Body" body=""
	I1213 10:31:53.476081  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.476419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:53.976122  941476 type.go:168] "Request Body" body=""
	I1213 10:31:53.976204  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.976539  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.476283  941476 type.go:168] "Request Body" body=""
	I1213 10:31:54.476358  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.476609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:54.476652  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:54.976007  941476 type.go:168] "Request Body" body=""
	I1213 10:31:54.976081  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.976403  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.476020  941476 type.go:168] "Request Body" body=""
	I1213 10:31:55.476101  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.976175  941476 type.go:168] "Request Body" body=""
	I1213 10:31:55.976246  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.976517  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:31:56.476086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.476452  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:56.976090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:56.976513  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:57.476145  941476 type.go:168] "Request Body" body=""
	I1213 10:31:57.476215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.476478  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:57.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:31:57.976085  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.976451  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:31:58.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.476420  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:58.976112  941476 type.go:168] "Request Body" body=""
	I1213 10:31:58.976184  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:58.976451  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:59.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:31:59.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.476444  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:59.476501  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:59.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:31:59.976103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:59.976445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:00.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:32:00.476100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:00.976042  941476 type.go:168] "Request Body" body=""
	I1213 10:32:00.976122  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:00.976457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:01.476038  941476 type.go:168] "Request Body" body=""
	I1213 10:32:01.476135  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.476461  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:01.476525  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:01.976433  941476 type.go:168] "Request Body" body=""
	I1213 10:32:01.976500  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:01.976760  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:02.476646  941476 type.go:168] "Request Body" body=""
	I1213 10:32:02.476736  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.477125  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:02.976957  941476 type.go:168] "Request Body" body=""
	I1213 10:32:02.977037  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:02.977386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.476003  941476 type.go:168] "Request Body" body=""
	I1213 10:32:03.476067  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.476327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:03.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:32:03.976099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:03.976425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:03.976487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:04.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:04.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.476477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:04.976189  941476 type.go:168] "Request Body" body=""
	I1213 10:32:04.976259  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:04.976524  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:32:05.476131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.476494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.976279  941476 type.go:168] "Request Body" body=""
	I1213 10:32:05.976358  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.976703  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:05.976759  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:06.476417  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.476497  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.476760  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.976645  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.976724  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.977077  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.476899  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.476981  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.477364  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.476070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.476152  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.476469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:08.476525  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:08.976049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.476367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.976056  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.976488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.476201  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.476604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:10.476662  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:10.975985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.976386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.476435  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.976014  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.976091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.476328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.976433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:12.976487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.976139  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.976217  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.976477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.476149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.476488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.976200  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.976280  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:14.976691  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:15.476331  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.476407  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.476718  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.976843  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.976916  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.977265  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.476944  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.477018  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.477394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.976173  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.476515  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:17.976191  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.976268  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.976582  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.475997  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.476340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.976113  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.976206  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.976563  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.476129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.476456  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.976166  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.976467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:19.976522  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.476441  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.976163  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.976242  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.976531  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.475975  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.476045  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.476354  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.976036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.976111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.976471  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.476157  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.476236  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.476595  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.476649  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:22.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.976063  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.976350  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.476117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.976206  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.976283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.976637  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.476065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.476346  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.976054  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.976136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.976464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:24.976520  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.476258  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.976593  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.976662  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.976936  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.476747  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.476821  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.477090  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.975948  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.976024  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.476084  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.476158  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.476474  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:27.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.976087  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.976410  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.476155  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.476244  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.476588  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.976255  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.976331  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.976594  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.476476  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:29.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:29.976055  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.976132  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.976460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.976108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.976436  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.476119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.976398  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.976466  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.976719  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:31.976758  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.476588  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.476670  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.477064  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.976842  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.976917  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.977255  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.475960  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.476032  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.476294  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.976448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.476153  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.476568  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.476624  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.976336  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.976618  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.476037  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.476116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.476453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.976396  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.976472  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.976804  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.476554  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.476624  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.476895  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.476937  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:36.976884  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.976958  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.977293  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.976074  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.976340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.476062  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.476138  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.476437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.976005  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.976078  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.976403  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.976454  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.475982  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.476428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.976002  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.976082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.476038  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.476462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.976166  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.976245  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.976502  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:40.976544  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.476073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.976208  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.976289  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.976643  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.476353  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.976069  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.976137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.476306  941476 type.go:168] "Request Body" body=""
	I1213 10:32:43.476396  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.476750  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.476809  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:43.976720  941476 type.go:168] "Request Body" body=""
	I1213 10:32:43.976798  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.977089  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.477009  941476 type.go:168] "Request Body" body=""
	I1213 10:32:44.477085  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.477386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.976767  941476 type.go:168] "Request Body" body=""
	I1213 10:32:44.976848  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.977176  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.475924  941476 type.go:168] "Request Body" body=""
	I1213 10:32:45.476036  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.476370  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.975913  941476 type.go:168] "Request Body" body=""
	I1213 10:32:45.975984  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.976317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:45.976387  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:46.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:32:46.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:46.975972  941476 type.go:168] "Request Body" body=""
	I1213 10:32:46.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.976351  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.476004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:47.476136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.976017  941476 type.go:168] "Request Body" body=""
	I1213 10:32:47.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.976421  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:47.976477  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:48.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:32:48.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:48.976015  941476 type.go:168] "Request Body" body=""
	I1213 10:32:48.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.976419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:49.476106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.476423  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.975990  941476 type.go:168] "Request Body" body=""
	I1213 10:32:49.976065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.976312  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:50.476026  941476 type.go:168] "Request Body" body=""
	I1213 10:32:50.476104  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.476430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:50.476486  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:50.976049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:50.976131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.976481  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.476188  941476 type.go:168] "Request Body" body=""
	I1213 10:32:51.476259  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.476529  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.976428  941476 type.go:168] "Request Body" body=""
	I1213 10:32:51.976507  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.976844  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.476643  941476 type.go:168] "Request Body" body=""
	I1213 10:32:52.476721  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.477067  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:52.477124  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:52.976867  941476 type.go:168] "Request Body" body=""
	I1213 10:32:52.976936  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.977207  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.475946  941476 type.go:168] "Request Body" body=""
	I1213 10:32:53.476027  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.476328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.975930  941476 type.go:168] "Request Body" body=""
	I1213 10:32:53.976034  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.976391  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.475960  941476 type.go:168] "Request Body" body=""
	I1213 10:32:54.476035  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.476297  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.975999  941476 type.go:168] "Request Body" body=""
	I1213 10:32:54.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.976357  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:54.976407  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:55.476011  941476 type.go:168] "Request Body" body=""
	I1213 10:32:55.476101  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.476377  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:55.976254  941476 type.go:168] "Request Body" body=""
	I1213 10:32:55.976330  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.976613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:56.476109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.976037  941476 type.go:168] "Request Body" body=""
	I1213 10:32:56.976111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.976434  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:56.976489  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:57.475980  941476 type.go:168] "Request Body" body=""
	I1213 10:32:57.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.476382  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:57.976008  941476 type.go:168] "Request Body" body=""
	I1213 10:32:57.976084  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.976417  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:32:58.476116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.476441  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:32:58.976067  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.976351  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:59.476097  941476 type.go:168] "Request Body" body=""
	I1213 10:32:59.476175  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.476508  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:59.476569  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:59.976006  941476 type.go:168] "Request Body" body=""
	I1213 10:32:59.976086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.976416  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.476102  941476 type.go:168] "Request Body" body=""
	I1213 10:33:00.476181  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.476460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.976047  941476 type.go:168] "Request Body" body=""
	I1213 10:33:00.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.976487  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.476029  941476 type.go:168] "Request Body" body=""
	I1213 10:33:01.476105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.975971  941476 type.go:168] "Request Body" body=""
	I1213 10:33:01.976042  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.976355  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:01.976407  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:02.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:02.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.476438  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:33:02.976252  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.476323  941476 type.go:168] "Request Body" body=""
	I1213 10:33:03.476399  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.476657  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.976052  941476 type.go:168] "Request Body" body=""
	I1213 10:33:03.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:03.976518  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:04.476187  941476 type.go:168] "Request Body" body=""
	I1213 10:33:04.476262  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.476613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:04.976299  941476 type.go:168] "Request Body" body=""
	I1213 10:33:04.976377  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.976641  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.476304  941476 type.go:168] "Request Body" body=""
	I1213 10:33:05.476380  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.476711  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.976815  941476 type.go:168] "Request Body" body=""
	I1213 10:33:05.976895  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.977239  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:05.977294  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:06.475975  941476 type.go:168] "Request Body" body=""
	I1213 10:33:06.476047  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.476308  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:06.976045  941476 type.go:168] "Request Body" body=""
	I1213 10:33:06.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.976516  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.476071  941476 type.go:168] "Request Body" body=""
	I1213 10:33:07.476148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.476544  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.976078  941476 type.go:168] "Request Body" body=""
	I1213 10:33:07.976149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:08.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:33:08.476099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:08.476487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:08.976023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:08.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.976462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.476176  941476 type.go:168] "Request Body" body=""
	I1213 10:33:09.476251  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.476526  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:33:09.976104  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:10.476184  941476 type.go:168] "Request Body" body=""
	I1213 10:33:10.476271  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.476609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:10.476665  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:10.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:33:10.976076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.976358  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:11.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:33:11.476129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.476473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:11.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:11.976106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.976465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.476140  941476 type.go:168] "Request Body" body=""
	I1213 10:33:12.476209  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.476469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:12.976099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.976394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:12.976444  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:13.476111  941476 type.go:168] "Request Body" body=""
	I1213 10:33:13.476187  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.476533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:13.976215  941476 type.go:168] "Request Body" body=""
	I1213 10:33:13.976284  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.976554  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:14.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.976164  941476 type.go:168] "Request Body" body=""
	I1213 10:33:14.976241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.976581  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:14.976644  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:15.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:33:15.476046  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.476298  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:15.975945  941476 type.go:168] "Request Body" body=""
	I1213 10:33:15.976032  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.476144  941476 type.go:168] "Request Body" body=""
	I1213 10:33:16.476219  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.476559  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.976466  941476 type.go:168] "Request Body" body=""
	I1213 10:33:16.976541  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.976809  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:16.976860  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:17.476687  941476 type.go:168] "Request Body" body=""
	I1213 10:33:17.476761  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.477087  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:17.976932  941476 type.go:168] "Request Body" body=""
	I1213 10:33:17.977005  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.977321  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:18.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:33:18.476076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.476392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:18.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:33:18.976114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.976472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:19.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:33:19.476090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.476437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:19.476492  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:19.975984  941476 type.go:168] "Request Body" body=""
	I1213 10:33:19.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.976331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:20.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:20.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:20.976140  941476 type.go:168] "Request Body" body=""
	I1213 10:33:20.976215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.976570  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:21.476259  941476 type.go:168] "Request Body" body=""
	I1213 10:33:21.476335  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.476598  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:21.476641  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:21.976641  941476 type.go:168] "Request Body" body=""
	I1213 10:33:21.976721  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.977055  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:22.476842  941476 type.go:168] "Request Body" body=""
	I1213 10:33:22.476921  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.477263  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:22.975958  941476 type.go:168] "Request Body" body=""
	I1213 10:33:22.976026  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.976279  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:23.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.476440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.976154  941476 type.go:168] "Request Body" body=""
	I1213 10:33:23.976230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.976599  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:23.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:24.476302  941476 type.go:168] "Request Body" body=""
	I1213 10:33:24.476382  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:24.476643  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:24.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:24.976088  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:24.976409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:25.476125  941476 type.go:168] "Request Body" body=""
	I1213 10:33:25.476201  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:25.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:25.976508  941476 type.go:168] "Request Body" body=""
	I1213 10:33:25.976580  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:25.976838  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:25.976879  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:26.476580  941476 type.go:168] "Request Body" body=""
	I1213 10:33:26.476662  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:26.476989  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:26.975922  941476 type.go:168] "Request Body" body=""
	I1213 10:33:26.976010  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:26.976354  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:27.476098  941476 type.go:168] "Request Body" body=""
	I1213 10:33:27.476184  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:27.476458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:27.976019  941476 type.go:168] "Request Body" body=""
	I1213 10:33:27.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:27.976466  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:28.476177  941476 type.go:168] "Request Body" body=""
	I1213 10:33:28.476256  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:28.476603  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:28.476659  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:28.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:33:28.976063  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:28.976324  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:29.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:33:29.476066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:29.476404  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:29.975996  941476 type.go:168] "Request Body" body=""
	I1213 10:33:29.976077  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:29.976425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:30.476099  941476 type.go:168] "Request Body" body=""
	I1213 10:33:30.476165  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:30.476425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:30.976057  941476 type.go:168] "Request Body" body=""
	I1213 10:33:30.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:30.976428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:30.976479  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:31.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:31.476102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:31.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:31.975996  941476 type.go:168] "Request Body" body=""
	I1213 10:33:31.976062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:31.976317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:32.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:33:32.476123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:32.476463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:32.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:33:32.976239  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:32.976579  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:32.976636  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:33.476103  941476 type.go:168] "Request Body" body=""
	I1213 10:33:33.476175  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:33.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:33.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:33:33.976098  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:33.976421  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:34.476116  941476 type.go:168] "Request Body" body=""
	I1213 10:33:34.476189  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:34.476493  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:34.976055  941476 type.go:168] "Request Body" body=""
	I1213 10:33:34.976123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:34.976382  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:35.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:35.476099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:35.476443  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:35.476499  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:35.975939  941476 type.go:168] "Request Body" body=""
	I1213 10:33:35.976014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:35.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:36.475982  941476 type.go:168] "Request Body" body=""
	I1213 10:33:36.476086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:36.476409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:36.976046  941476 type.go:168] "Request Body" body=""
	I1213 10:33:36.976117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:36.976443  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:37.476164  941476 type.go:168] "Request Body" body=""
	I1213 10:33:37.476242  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:37.476524  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:37.476575  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:37.976198  941476 type.go:168] "Request Body" body=""
	I1213 10:33:37.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:37.976533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:38.476039  941476 type.go:168] "Request Body" body=""
	I1213 10:33:38.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:38.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:38.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:33:38.976199  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:38.976530  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:39.476088  941476 type.go:168] "Request Body" body=""
	I1213 10:33:39.476161  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:39.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:39.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:33:39.976084  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:39.976397  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:39.976449  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:40.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:33:40.476076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:40.476414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:40.976095  941476 type.go:168] "Request Body" body=""
	I1213 10:33:40.976167  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:40.976436  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:41.476022  941476 type.go:168] "Request Body" body=""
	I1213 10:33:41.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.476397  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:41.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:41.976092  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.976658  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:41.976706  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:42.475980  941476 type.go:168] "Request Body" body=""
	I1213 10:33:42.476055  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.476675  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:42.976377  941476 type.go:168] "Request Body" body=""
	I1213 10:33:42.976455  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.976815  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:43.476621  941476 type.go:168] "Request Body" body=""
	I1213 10:33:43.476701  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.477037  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:43.976823  941476 type.go:168] "Request Body" body=""
	I1213 10:33:43.976888  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.977141  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:43.977181  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:44.476934  941476 type.go:168] "Request Body" body=""
	I1213 10:33:44.477006  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.477335  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:44.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:33:44.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.976470  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.476385  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.976244  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.976320  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.976638  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:46.476051  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.476479  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:46.476535  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:46.975987  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.976313  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.976041  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.976125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:48.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.476522  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:48.476583  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:48.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.976075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.976407  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:33:49.476190  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.476513  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.975984  941476 type.go:168] "Request Body" body=""
	I1213 10:33:49.976052  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.976304  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:50.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:33:50.476105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.476430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:50.976125  941476 type.go:168] "Request Body" body=""
	I1213 10:33:50.976206  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.976556  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:50.976613  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:51.476129  941476 type.go:168] "Request Body" body=""
	I1213 10:33:51.476201  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:51.476471  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:51.976229  941476 type.go:168] "Request Body" body=""
	I1213 10:33:51.976307  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:51.976619  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:52.476357  941476 type.go:168] "Request Body" body=""
	I1213 10:33:52.476455  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.476789  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:52.976543  941476 type.go:168] "Request Body" body=""
	I1213 10:33:52.976619  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:52.976876  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:52.976919  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:53.476697  941476 type.go:168] "Request Body" body=""
	I1213 10:33:53.476776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:53.477117  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... ~120 further identical polling cycles omitted: the GET https://192.168.49.2:8441/api/v1/nodes/functional-200955 request above repeats every 500ms from 10:33:53.976 to 10:34:54.976, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the same "will retry" warning roughly every 2s throughout ...]
	I1213 10:34:55.476046  941476 type.go:168] "Request Body" body=""
	I1213 10:34:55.476127  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.476479  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:55.476535  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:55.976212  941476 type.go:168] "Request Body" body=""
	I1213 10:34:55.976283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.976540  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:56.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:34:56.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.476472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:56.976530  941476 type.go:168] "Request Body" body=""
	I1213 10:34:56.976612  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.977004  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:57.476807  941476 type.go:168] "Request Body" body=""
	I1213 10:34:57.476890  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.477154  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:57.477196  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:57.977018  941476 type.go:168] "Request Body" body=""
	I1213 10:34:57.977109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.977446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:58.476146  941476 type.go:168] "Request Body" body=""
	I1213 10:34:58.476225  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.476550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:58.976270  941476 type.go:168] "Request Body" body=""
	I1213 10:34:58.976346  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.976611  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.476051  941476 type.go:168] "Request Body" body=""
	I1213 10:34:59.476143  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.476548  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.976128  941476 type.go:168] "Request Body" body=""
	I1213 10:34:59.976213  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.976516  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:59.976563  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:00.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:35:00.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:00.976317  941476 type.go:168] "Request Body" body=""
	I1213 10:35:00.976411  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:01.476609  941476 type.go:168] "Request Body" body=""
	I1213 10:35:01.476689  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.477045  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:01.976793  941476 type.go:168] "Request Body" body=""
	I1213 10:35:01.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.977145  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:01.977189  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:02.476982  941476 type.go:168] "Request Body" body=""
	I1213 10:35:02.477061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.477408  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:02.976099  941476 type.go:168] "Request Body" body=""
	I1213 10:35:02.976178  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.976550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:03.476237  941476 type.go:168] "Request Body" body=""
	I1213 10:35:03.476319  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.476595  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:03.976292  941476 type.go:168] "Request Body" body=""
	I1213 10:35:03.976381  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.976725  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:04.476528  941476 type.go:168] "Request Body" body=""
	I1213 10:35:04.476603  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.476926  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:04.476983  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:04.976700  941476 type.go:168] "Request Body" body=""
	I1213 10:35:04.976771  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.977027  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:05.476835  941476 type.go:168] "Request Body" body=""
	I1213 10:35:05.476914  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.477258  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:05.976201  941476 type.go:168] "Request Body" body=""
	I1213 10:35:05.976279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.476362  941476 type.go:168] "Request Body" body=""
	I1213 10:35:06.476440  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.476705  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.976610  941476 type.go:168] "Request Body" body=""
	I1213 10:35:06.976688  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.977052  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:06.977113  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:07.476898  941476 type.go:168] "Request Body" body=""
	I1213 10:35:07.476978  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.477359  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:07.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:35:07.976075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.976399  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.476093  941476 type.go:168] "Request Body" body=""
	I1213 10:35:08.476179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.976239  941476 type.go:168] "Request Body" body=""
	I1213 10:35:08.976318  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.976631  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:09.475998  941476 type.go:168] "Request Body" body=""
	I1213 10:35:09.476070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.476334  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:09.476377  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:09.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:35:09.976103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.976446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.476153  941476 type.go:168] "Request Body" body=""
	I1213 10:35:10.476230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.476565  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.976284  941476 type.go:168] "Request Body" body=""
	I1213 10:35:10.976359  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.976641  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:11.476331  941476 type.go:168] "Request Body" body=""
	I1213 10:35:11.476408  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.476754  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:11.476819  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:11.976620  941476 type.go:168] "Request Body" body=""
	I1213 10:35:11.976709  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.977042  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:12.476813  941476 type.go:168] "Request Body" body=""
	I1213 10:35:12.476885  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.477142  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:12.976929  941476 type.go:168] "Request Body" body=""
	I1213 10:35:12.977022  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.977398  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.476001  941476 type.go:168] "Request Body" body=""
	I1213 10:35:13.476080  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.476431  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.976122  941476 type.go:168] "Request Body" body=""
	I1213 10:35:13.976192  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.976457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:13.976500  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:14.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:35:14.476065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.476409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:14.976135  941476 type.go:168] "Request Body" body=""
	I1213 10:35:14.976241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.976610  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.476299  941476 type.go:168] "Request Body" body=""
	I1213 10:35:15.476374  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.476636  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.976597  941476 type.go:168] "Request Body" body=""
	I1213 10:35:15.976678  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.977009  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:15.977062  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:16.476828  941476 type.go:168] "Request Body" body=""
	I1213 10:35:16.476909  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.477284  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:16.975983  941476 type.go:168] "Request Body" body=""
	I1213 10:35:16.976057  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:17.476005  941476 type.go:168] "Request Body" body=""
	I1213 10:35:17.476082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:17.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:17.976147  941476 type.go:168] "Request Body" body=""
	I1213 10:35:17.976234  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:17.976566  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:18.476096  941476 type.go:168] "Request Body" body=""
	I1213 10:35:18.476172  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:18.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:18.476495  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:18.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:35:18.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:18.976435  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:19.476137  941476 type.go:168] "Request Body" body=""
	I1213 10:35:19.476227  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:19.476564  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:19.976248  941476 type.go:168] "Request Body" body=""
	I1213 10:35:19.976327  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:19.976600  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:20.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:35:20.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:20.476474  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:20.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:20.976196  941476 type.go:168] "Request Body" body=""
	I1213 10:35:20.976277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:20.976613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:21.476306  941476 type.go:168] "Request Body" body=""
	I1213 10:35:21.476385  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:21.476650  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:21.976568  941476 type.go:168] "Request Body" body=""
	I1213 10:35:21.976645  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:21.976977  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:22.476793  941476 type.go:168] "Request Body" body=""
	I1213 10:35:22.476870  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:22.477217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:22.477279  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:22.975966  941476 type.go:168] "Request Body" body=""
	I1213 10:35:22.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:22.976311  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:23.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:35:23.476125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:23.476480  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:23.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:35:23.976153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:23.976505  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:24.476197  941476 type.go:168] "Request Body" body=""
	I1213 10:35:24.476265  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:24.476534  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:24.976215  941476 type.go:168] "Request Body" body=""
	I1213 10:35:24.976288  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:24.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:24.976686  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:25.476352  941476 type.go:168] "Request Body" body=""
	I1213 10:35:25.476428  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:25.476773  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:25.976631  941476 type.go:168] "Request Body" body=""
	I1213 10:35:25.976701  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:25.976974  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:26.476849  941476 type.go:168] "Request Body" body=""
	I1213 10:35:26.476924  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:26.477262  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:26.976050  941476 type.go:168] "Request Body" body=""
	I1213 10:35:26.976131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:26.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:27.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:35:27.476053  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:27.476355  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:27.476414  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:27.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:27.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:27.976388  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:28.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:35:28.476210  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:28.476540  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:28.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:35:28.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:28.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:29.476011  941476 type.go:168] "Request Body" body=""
	I1213 10:35:29.476091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:29.476427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:29.476488  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:29.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:35:29.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:29.976433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:30.476132  941476 type.go:168] "Request Body" body=""
	I1213 10:35:30.476208  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:30.476494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:30.976178  941476 type.go:168] "Request Body" body=""
	I1213 10:35:30.976260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:30.976576  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:31.476298  941476 type.go:168] "Request Body" body=""
	I1213 10:35:31.476371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:31.476716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:31.476774  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:31.976573  941476 type.go:168] "Request Body" body=""
	I1213 10:35:31.976645  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:31.976917  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:32.476713  941476 type.go:168] "Request Body" body=""
	I1213 10:35:32.476790  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:32.477195  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:32.975947  941476 type.go:168] "Request Body" body=""
	I1213 10:35:32.976021  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:32.976319  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:33.476001  941476 type.go:168] "Request Body" body=""
	I1213 10:35:33.476069  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.476324  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:33.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:35:33.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:33.976512  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:34.476185  941476 type.go:168] "Request Body" body=""
	I1213 10:35:34.476263  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.476596  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:34.975980  941476 type.go:168] "Request Body" body=""
	I1213 10:35:34.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.976361  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:35.476014  941476 type.go:168] "Request Body" body=""
	I1213 10:35:35.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.476451  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:35.976934  941476 type.go:168] "Request Body" body=""
	I1213 10:35:35.977011  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.977366  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:35.977428  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:36.476062  941476 type.go:168] "Request Body" body=""
	I1213 10:35:36.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.476417  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:36.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:35:36.976334  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.976678  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.476392  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.476480  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.476822  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.976607  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.976691  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.976956  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:38.476714  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.476786  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.477099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:38.477160  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:38.976964  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.977048  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.977472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.476371  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.976024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.476250  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.476607  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.975981  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.976060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.976331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:40.976379  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:41.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:41.976057  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.480099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 10:35:42.976928  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.977007  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.977373  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:42.977438  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:43.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.476136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.476497  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:43.976064  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.976405  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.476012  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.976181  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.976260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.976576  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.476263  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.476338  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.476639  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:45.476717  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:45.976693  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.976776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.977113  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.476938  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.477014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.477384  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.976091  941476 node_ready.go:38] duration metric: took 6m0.000294728s for node "functional-200955" to be "Ready" ...
	I1213 10:35:46.979089  941476 out.go:203] 
	W1213 10:35:46.981875  941476 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:35:46.981899  941476 out.go:285] * 
	W1213 10:35:46.984058  941476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:35:46.987297  941476 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591623981Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.59170097Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591755928Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591808548Z" level=info msg="RDT not available in the host system"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591883757Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.592750315Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.592870521Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.592951095Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.593821723Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.593926823Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.594167883Z" level=info msg="Updated default CNI network name to "
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.595116846Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.595836842Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.595984519Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650542947Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650722263Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.65083465Z" level=info msg="Create NRI interface"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.65094615Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650963733Z" level=info msg="runtime interface created"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650979553Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650986076Z" level=info msg="runtime interface starting up..."
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.65099342Z" level=info msg="starting plugins..."
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.651007959Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.651079788Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:29:44 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:35:48.812709    8574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:48.813346    8574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:48.814947    8574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:48.815438    8574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:48.817249    8574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:35:48 up  5:18,  0 user,  load average: 0.10, 0.31, 0.87
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:35:46 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:46 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 13 10:35:46 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:46 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:47 functional-200955 kubelet[8462]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:47 functional-200955 kubelet[8462]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:47 functional-200955 kubelet[8462]: E1213 10:35:47.044458    8462 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:47 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:47 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:47 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 13 10:35:47 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:47 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:47 functional-200955 kubelet[8469]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:47 functional-200955 kubelet[8469]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:47 functional-200955 kubelet[8469]: E1213 10:35:47.776163    8469 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:47 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:47 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:48 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 13 10:35:48 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:48 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:48 functional-200955 kubelet[8495]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:48 functional-200955 kubelet[8495]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:48 functional-200955 kubelet[8495]: E1213 10:35:48.541150    8495 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:48 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:48 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
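
The kubelet excerpt above pins down the proximate cause of this failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a tight loop (restart counter 1136-1138), and the apiserver on port 8441 never comes up. Below is a minimal Go sketch of how one could verify a host's cgroup version before starting a cluster; it assumes the golang.org/x/sys/unix package and is illustrative only, not minikube code.

	// cgroupcheck.go (hypothetical): report whether /sys/fs/cgroup is a
	// cgroup2 (unified) mount. On a cgroup v1 host it is a tmpfs holding
	// per-controller mounts instead.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var fs unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
			fmt.Println("statfs /sys/fs/cgroup:", err)
			return
		}
		if fs.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified): kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1: matches the kubelet validation failure above")
		}
	}

On this Ubuntu 20.04 / 5.15 runner the second branch would be taken, consistent with the kubelet error repeated throughout the log.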
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (374.584417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.48s)
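
The node_ready.go lines at the top of this log show the shape of the wait that timed out: a GET on /api/v1/nodes/functional-200955 roughly every 500ms until the node reports Ready or the 6m0s StartHostTimeout expires, after which WaitNodeCondition surfaces "context deadline exceeded". The sketch below reproduces that loop with client-go; the function name and structure are assumptions for illustration, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True
	// or the context deadline fires, mirroring the retry cadence in the log.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			} // on "connection refused", as seen above, simply retry
			select {
			case <-ctx.Done():
				return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-200955"))
	}

Because every GET in this run ended in "connection refused", the loop could only exit through the deadline branch, which is exactly the GUEST_START error reported above.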

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-200955 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-200955 get po -A: exit status 1 (64.792832ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-200955 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-200955 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-200955 get po -A"
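
The refusal kubectl reports here happens at the TCP layer, before TLS or authentication are involved, so it can be reproduced without kubectl at all. A dial against the address from the stderr above (illustrative sketch only, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the failing kubectl output above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // expect "connect: connection refused"
			return
		}
		conn.Close()
		fmt.Println("port 8441 is accepting connections")
	}

A refusal here, while the container itself is Running (see the inspect output below), points at the apiserver process inside the node rather than at Docker networking.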
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
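
Two details in the inspect output are worth noting: HostConfig.PortBindings requests every port with an empty HostPort, so Docker assigns ephemeral host ports, and NetworkSettings.Ports records what was assigned (8441/tcp is published as 127.0.0.1:33526). Later in this log minikube reads the SSH port back with an inspect template over 22/tcp; the sketch below applies the same template to the apiserver port. This is assumed helper code, not minikube's.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template minikube uses for 22/tcp later in this log,
		// pointed at the apiserver port instead.
		tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "functional-200955").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container above this prints 33526, i.e. the host side of
		// 127.0.0.1:33526 -> 192.168.49.2:8441.
		fmt.Println(strings.TrimSpace(string(out)))
	}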
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (337.419186ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 logs -n 25: (1.086796009s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-769798 ssh sudo cat /usr/share/ca-certificates/907484.pem                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/test/nested/copy/907484/hosts                                                                                 │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/9074842.pem                                                                                         │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /usr/share/ca-certificates/9074842.pem                                                                             │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /home/docker/cp-test.txt                                                                      │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp functional-769798:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1526269303/001/cp-test.txt                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /home/docker/cp-test.txt                                                                      │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ cp             │ functional-769798 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                         │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format short --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh -n functional-769798 sudo cat /tmp/does/not/exist/cp-test.txt                                                               │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format yaml --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh            │ functional-769798 ssh pgrep buildkitd                                                                                                             │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ image          │ functional-769798 image ls --format json --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr                                            │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format table --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls                                                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete         │ -p functional-769798                                                                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start          │ -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-200955 --alsologtostderr -v=8                                                                                                       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:29:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:29:41.597851  941476 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:29:41.597968  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.597980  941476 out.go:374] Setting ErrFile to fd 2...
	I1213 10:29:41.597985  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.598264  941476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:29:41.598640  941476 out.go:368] Setting JSON to false
	I1213 10:29:41.599496  941476 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18731,"bootTime":1765603051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:29:41.599570  941476 start.go:143] virtualization:  
	I1213 10:29:41.603284  941476 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:29:41.606132  941476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:29:41.606240  941476 notify.go:221] Checking for updates...
	I1213 10:29:41.611909  941476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:29:41.614766  941476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:41.617588  941476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:29:41.620495  941476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:29:41.623575  941476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:29:41.626951  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:41.627063  941476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:29:41.660528  941476 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:29:41.660648  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.716071  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.706597811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.716181  941476 docker.go:319] overlay module found
	I1213 10:29:41.719241  941476 out.go:179] * Using the docker driver based on existing profile
	I1213 10:29:41.721997  941476 start.go:309] selected driver: docker
	I1213 10:29:41.722027  941476 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.722127  941476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:29:41.722252  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.778165  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.768783539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.778600  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:41.778650  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:41.778703  941476 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.781806  941476 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:29:41.784501  941476 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:29:41.787625  941476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:29:41.790577  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:41.790637  941476 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:29:41.790650  941476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:29:41.790656  941476 cache.go:65] Caching tarball of preloaded images
	I1213 10:29:41.790739  941476 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:29:41.790750  941476 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:29:41.790859  941476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:29:41.809947  941476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:29:41.809969  941476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:29:41.809989  941476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:29:41.810023  941476 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:29:41.810091  941476 start.go:364] duration metric: took 45.924µs to acquireMachinesLock for "functional-200955"
	I1213 10:29:41.810115  941476 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:29:41.810124  941476 fix.go:54] fixHost starting: 
	I1213 10:29:41.810397  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:41.827321  941476 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:29:41.827351  941476 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:29:41.830448  941476 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:29:41.830480  941476 machine.go:94] provisionDockerMachine start ...
	I1213 10:29:41.830562  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:41.846863  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:41.847197  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:41.847214  941476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:29:41.996943  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:41.996971  941476 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:29:41.997042  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.018825  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.019169  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.019192  941476 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:29:42.186347  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:42.186459  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.209314  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.209694  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.209712  941476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:29:42.370026  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:29:42.370125  941476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:29:42.370174  941476 ubuntu.go:190] setting up certificates
	I1213 10:29:42.370200  941476 provision.go:84] configureAuth start
	I1213 10:29:42.370268  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:42.388638  941476 provision.go:143] copyHostCerts
	I1213 10:29:42.388684  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388728  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:29:42.388739  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388819  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:29:42.388924  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388947  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:29:42.388956  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388985  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:29:42.389034  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389056  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:29:42.389064  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389093  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:29:42.389148  941476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:29:42.553052  941476 provision.go:177] copyRemoteCerts
	I1213 10:29:42.553125  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:29:42.553174  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.571937  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:42.681380  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:29:42.681440  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:29:42.698297  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:29:42.698381  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:29:42.715245  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:29:42.715360  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:29:42.732152  941476 provision.go:87] duration metric: took 361.926272ms to configureAuth
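	configureAuth generates a server certificate whose subject and SANs match the san=[...] list logged above, then pushes the CA, cert, and key to /etc/docker on the node. A sketch of the implied crypto/x509 template (values taken from this run; the serial is a placeholder and the variable name is illustrative, not minikube's code; assumes crypto/x509, crypto/x509/pkix, math/big, and net):

	    // certificate template implied by the generate-server-cert line above
	    tmpl := &x509.Certificate{
	        SerialNumber: big.NewInt(1), // placeholder serial
	        Subject:      pkix.Name{Organization: []string{"jenkins.functional-200955"}},
	        DNSNames:     []string{"functional-200955", "localhost", "minikube"},
	        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	    }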
	I1213 10:29:42.732184  941476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:29:42.732358  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:42.732458  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.749290  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.749620  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.749643  941476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:29:43.093593  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:29:43.093619  941476 machine.go:97] duration metric: took 1.263130563s to provisionDockerMachine
	I1213 10:29:43.093630  941476 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:29:43.093643  941476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:29:43.093703  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:29:43.093752  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.110551  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.213067  941476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:29:43.216076  941476 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:29:43.216096  941476 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:29:43.216102  941476 command_runner.go:130] > VERSION_ID="12"
	I1213 10:29:43.216108  941476 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:29:43.216112  941476 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:29:43.216116  941476 command_runner.go:130] > ID=debian
	I1213 10:29:43.216121  941476 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:29:43.216125  941476 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:29:43.216147  941476 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:29:43.216196  941476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:29:43.216219  941476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:29:43.216231  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:29:43.216286  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:29:43.216365  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:29:43.216375  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /etc/ssl/certs/9074842.pem
	I1213 10:29:43.216452  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:29:43.216461  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> /etc/test/nested/copy/907484/hosts
	I1213 10:29:43.216512  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:29:43.223706  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:43.242619  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:29:43.261652  941476 start.go:296] duration metric: took 168.007176ms for postStartSetup
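	The filesync scan mirrors everything under the profile's .minikube/files tree onto the node's filesystem root, which is why files/etc/ssl/certs/9074842.pem lands at /etc/ssl/certs/9074842.pem. A sketch of that path mapping (remoteTarget is a hypothetical name; assumes path/filepath):

	    // remoteTarget maps a local asset under the files root to its node path.
	    func remoteTarget(filesRoot, local string) (string, error) {
	        rel, err := filepath.Rel(filesRoot, local)
	        if err != nil {
	            return "", err
	        }
	        return "/" + filepath.ToSlash(rel), nil
	    }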
	I1213 10:29:43.261748  941476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:29:43.261797  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.278068  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.377852  941476 command_runner.go:130] > 19%
	I1213 10:29:43.378272  941476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:29:43.382521  941476 command_runner.go:130] > 159G
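	Both df pipelines read one column from the second output row: df -h /var | awk 'NR==2{print $5}' yields the use percentage ("19%" here), and df -BG /var | awk 'NR==2{print $4}' the remaining space in gigabytes ("159G"). Running the first from Go looks roughly like this (a sketch using os/exec and strings, not minikube's helper):

	    out, err := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
	    if err == nil {
	        usedPct := strings.TrimSpace(string(out)) // e.g. "19%"
	        _ = usedPct
	    }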
	I1213 10:29:43.382892  941476 fix.go:56] duration metric: took 1.572759496s for fixHost
	I1213 10:29:43.382913  941476 start.go:83] releasing machines lock for "functional-200955", held for 1.572809064s
	I1213 10:29:43.382984  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:43.399315  941476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:29:43.399334  941476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:29:43.399371  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.399397  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.423081  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.424445  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.612877  941476 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:29:43.615557  941476 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:29:43.615725  941476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:29:43.621711  941476 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:29:43.621746  941476 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:29:43.622124  941476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:29:43.667216  941476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:29:43.671902  941476 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:29:43.672160  941476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:29:43.672241  941476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:29:43.679969  941476 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:29:43.679994  941476 start.go:496] detecting cgroup driver to use...
	I1213 10:29:43.680025  941476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:29:43.680082  941476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:29:43.694816  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:29:43.708840  941476 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:29:43.708902  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:29:43.727390  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:29:43.741194  941476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:29:43.853170  941476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:29:43.965117  941476 docker.go:234] disabling docker service ...
	I1213 10:29:43.965193  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:29:43.981069  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:29:43.993651  941476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:29:44.106510  941476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:29:44.230950  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:29:44.243823  941476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:29:44.258241  941476 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:29:44.259524  941476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:29:44.259625  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.267965  941476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:29:44.268046  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.277059  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.285643  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.295522  941476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:29:44.303650  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.312274  941476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.320905  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
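	Net effect of the sed edits above: /etc/crio/crio.conf.d/02-crio.conf now carries roughly the following values (reconstructed from the commands in this log, not a dump of the full file):

	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]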
	I1213 10:29:44.329531  941476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:29:44.336129  941476 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:29:44.337017  941476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:29:44.344665  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:44.479199  941476 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:29:44.656815  941476 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:29:44.656943  941476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:29:44.660542  941476 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:29:44.660573  941476 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:29:44.660581  941476 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 10:29:44.660588  941476 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:44.660594  941476 command_runner.go:130] > Access: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660602  941476 command_runner.go:130] > Modify: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660608  941476 command_runner.go:130] > Change: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660615  941476 command_runner.go:130] >  Birth: -
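	The 60-second wait polls for the CRI-O socket after the restart, and the stat output above shows it appeared almost immediately. A sketch of such a wait loop (waitForSocket is a hypothetical name; assumes os, time, and fmt):

	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil // socket is present
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", path)
	    }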
	I1213 10:29:44.660643  941476 start.go:564] Will wait 60s for crictl version
	I1213 10:29:44.660697  941476 ssh_runner.go:195] Run: which crictl
	I1213 10:29:44.664032  941476 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:29:44.664157  941476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:29:44.686934  941476 command_runner.go:130] > Version:  0.1.0
	I1213 10:29:44.686958  941476 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:29:44.686965  941476 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:29:44.686970  941476 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:29:44.687007  941476 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:29:44.687101  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.715374  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.715400  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.715407  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.715412  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.715417  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.715422  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.715435  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.715442  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.715446  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.715453  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.715457  941476 command_runner.go:130] >      static
	I1213 10:29:44.715461  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.715464  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.715476  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.715480  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.715484  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.715492  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.715496  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.715504  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.715508  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.717596  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.744267  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.744305  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.744312  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.744317  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.744322  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.744327  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.744331  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.744337  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.744341  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.744346  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.744350  941476 command_runner.go:130] >      static
	I1213 10:29:44.744376  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.744385  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.744390  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.744393  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.744397  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.744406  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.744411  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.744419  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.744424  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.751529  941476 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:29:44.754410  941476 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:29:44.770603  941476 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:29:44.774419  941476 command_runner.go:130] > 192.168.49.1	host.minikube.internal
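	The Go template passed to docker network inspect above renders one JSON object per network. With this run's values it would come out roughly as below; the gateway (192.168.49.1) and node IP (192.168.49.2) are taken from the surrounding log, while Driver, Subnet, and MTU are illustrative assumptions:

	    {"Name": "functional-200955","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 0, "ContainerIPs": ["192.168.49.2/24",]}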
	I1213 10:29:44.774622  941476 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:29:44.774752  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:44.774840  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.811833  941476 command_runner.go:130] > {
	I1213 10:29:44.811851  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.811855  941476 command_runner.go:130] >     {
	I1213 10:29:44.811864  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.811869  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811875  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.811879  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811883  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811892  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.811900  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.811904  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811908  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.811912  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811920  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811923  941476 command_runner.go:130] >     },
	I1213 10:29:44.811927  941476 command_runner.go:130] >     {
	I1213 10:29:44.811933  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.811938  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811944  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.811947  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811951  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811959  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.811968  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.811980  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811984  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.811988  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811994  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811997  941476 command_runner.go:130] >     },
	I1213 10:29:44.812000  941476 command_runner.go:130] >     {
	I1213 10:29:44.812007  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.812011  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812017  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.812020  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812024  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812032  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.812040  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.812047  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812051  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.812056  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.812059  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812062  941476 command_runner.go:130] >     },
	I1213 10:29:44.812066  941476 command_runner.go:130] >     {
	I1213 10:29:44.812073  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.812076  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812081  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.812085  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812089  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812097  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.812104  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.812109  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812113  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.812116  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812120  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812123  941476 command_runner.go:130] >       },
	I1213 10:29:44.812132  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812136  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812143  941476 command_runner.go:130] >     },
	I1213 10:29:44.812146  941476 command_runner.go:130] >     {
	I1213 10:29:44.812152  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.812156  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812161  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.812164  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812168  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812176  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.812184  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.812187  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812191  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.812195  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812198  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812201  941476 command_runner.go:130] >       },
	I1213 10:29:44.812204  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812208  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812211  941476 command_runner.go:130] >     },
	I1213 10:29:44.812213  941476 command_runner.go:130] >     {
	I1213 10:29:44.812220  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.812224  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812230  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.812233  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812236  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812244  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.812253  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.812256  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812259  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.812263  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812266  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812269  941476 command_runner.go:130] >       },
	I1213 10:29:44.812273  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812277  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812280  941476 command_runner.go:130] >     },
	I1213 10:29:44.812286  941476 command_runner.go:130] >     {
	I1213 10:29:44.812293  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.812296  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812302  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.812304  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812308  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812316  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.812323  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.812326  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812330  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.812334  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812337  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812340  941476 command_runner.go:130] >     },
	I1213 10:29:44.812343  941476 command_runner.go:130] >     {
	I1213 10:29:44.812349  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.812353  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812358  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.812361  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812364  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812372  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.812390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.812393  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812397  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.812400  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812405  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812408  941476 command_runner.go:130] >       },
	I1213 10:29:44.812412  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812416  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812419  941476 command_runner.go:130] >     },
	I1213 10:29:44.812422  941476 command_runner.go:130] >     {
	I1213 10:29:44.812428  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.812432  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812436  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.812442  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812446  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812454  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.812462  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.812464  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812468  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.812471  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812475  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.812478  941476 command_runner.go:130] >       },
	I1213 10:29:44.812482  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812485  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.812488  941476 command_runner.go:130] >     }
	I1213 10:29:44.812491  941476 command_runner.go:130] >   ]
	I1213 10:29:44.812494  941476 command_runner.go:130] > }
	I1213 10:29:44.812656  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.812664  941476 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:29:44.812720  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.834840  941476 command_runner.go:130] > {
	I1213 10:29:44.834859  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.834863  941476 command_runner.go:130] >     {
	I1213 10:29:44.834871  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.834878  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834893  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.834897  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834903  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834913  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.834921  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.834924  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834928  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.834932  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.834941  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.834944  941476 command_runner.go:130] >     },
	I1213 10:29:44.834947  941476 command_runner.go:130] >     {
	I1213 10:29:44.834953  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.834957  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834962  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.834965  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834969  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834977  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.834986  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.834989  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834993  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.834997  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835006  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835009  941476 command_runner.go:130] >     },
	I1213 10:29:44.835013  941476 command_runner.go:130] >     {
	I1213 10:29:44.835019  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.835023  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835028  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.835032  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835036  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835044  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.835052  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.835055  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835058  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.835062  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.835066  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835069  941476 command_runner.go:130] >     },
	I1213 10:29:44.835073  941476 command_runner.go:130] >     {
	I1213 10:29:44.835080  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.835083  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835088  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.835093  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835100  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835108  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.835116  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.835119  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835123  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.835127  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835131  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835134  941476 command_runner.go:130] >       },
	I1213 10:29:44.835147  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835151  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835154  941476 command_runner.go:130] >     },
	I1213 10:29:44.835157  941476 command_runner.go:130] >     {
	I1213 10:29:44.835163  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.835167  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835172  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.835175  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835179  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835187  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.835195  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.835197  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835201  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.835205  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835209  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835212  941476 command_runner.go:130] >       },
	I1213 10:29:44.835215  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835219  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835222  941476 command_runner.go:130] >     },
	I1213 10:29:44.835224  941476 command_runner.go:130] >     {
	I1213 10:29:44.835231  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.835234  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835240  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.835243  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835247  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835261  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.835270  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.835273  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835277  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.835281  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835285  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835288  941476 command_runner.go:130] >       },
	I1213 10:29:44.835292  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835295  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835298  941476 command_runner.go:130] >     },
	I1213 10:29:44.835302  941476 command_runner.go:130] >     {
	I1213 10:29:44.835309  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.835312  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835318  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.835320  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835324  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835332  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.835340  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.835343  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835347  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.835351  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835355  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835358  941476 command_runner.go:130] >     },
	I1213 10:29:44.835361  941476 command_runner.go:130] >     {
	I1213 10:29:44.835367  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.835371  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835376  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.835379  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835383  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.835407  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.835411  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835415  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.835422  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835426  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835429  941476 command_runner.go:130] >       },
	I1213 10:29:44.835433  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835436  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835439  941476 command_runner.go:130] >     },
	I1213 10:29:44.835442  941476 command_runner.go:130] >     {
	I1213 10:29:44.835449  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.835452  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835457  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.835460  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835463  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835470  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.835478  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.835481  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835485  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.835489  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835492  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.835495  941476 command_runner.go:130] >       },
	I1213 10:29:44.835499  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835503  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.835506  941476 command_runner.go:130] >     }
	I1213 10:29:44.835508  941476 command_runner.go:130] >   ]
	I1213 10:29:44.835512  941476 command_runner.go:130] > }
	I1213 10:29:44.838144  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.838206  941476 cache_images.go:86] Images are preloaded, skipping loading
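	The preload check parses the crictl images --output json payload printed twice above. A Go sketch of structs matching the fields present in that payload (names inferred from this log rather than taken from the upstream CRI API types):

	    type crictlImages struct {
	        Images []struct {
	            ID          string   `json:"id"`
	            RepoTags    []string `json:"repoTags"`
	            RepoDigests []string `json:"repoDigests"`
	            Size        string   `json:"size"`
	            Username    string   `json:"username"`
	            Pinned      bool     `json:"pinned"`
	        } `json:"images"`
	    }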
	I1213 10:29:44.838219  941476 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:29:44.838324  941476 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
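	One detail in the kubelet drop-in above: the bare ExecStart= line is required. For a systemd service, assigning an empty value clears the ExecStart list inherited from the packaged unit, so the override can set its own command without systemd rejecting a duplicate. The generic drop-in pattern (paths here are illustrative):

	    # drop-in pattern: clear the inherited command, then redefine it
	    [Service]
	    ExecStart=
	    ExecStart=/usr/local/bin/new-command --flag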
	I1213 10:29:44.838426  941476 ssh_runner.go:195] Run: crio config
	I1213 10:29:44.886075  941476 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:29:44.886098  941476 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:29:44.886106  941476 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:29:44.886110  941476 command_runner.go:130] > #
	I1213 10:29:44.886117  941476 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:29:44.886124  941476 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:29:44.886130  941476 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:29:44.886139  941476 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:29:44.886142  941476 command_runner.go:130] > # reload'.
	I1213 10:29:44.886162  941476 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:29:44.886169  941476 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:29:44.886175  941476 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:29:44.886181  941476 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:29:44.886184  941476 command_runner.go:130] > [crio]
	I1213 10:29:44.886190  941476 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:29:44.886195  941476 command_runner.go:130] > # containers images, in this directory.
	I1213 10:29:44.886932  941476 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:29:44.886948  941476 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:29:44.887520  941476 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:29:44.887536  941476 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:29:44.887990  941476 command_runner.go:130] > # imagestore = ""
	I1213 10:29:44.888002  941476 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:29:44.888019  941476 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:29:44.888390  941476 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:29:44.888402  941476 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:29:44.888409  941476 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:29:44.888578  941476 command_runner.go:130] > # storage_option = [
	I1213 10:29:44.888743  941476 command_runner.go:130] > # ]
	I1213 10:29:44.888754  941476 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:29:44.888761  941476 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:29:44.888765  941476 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:29:44.888771  941476 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:29:44.888787  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:29:44.888792  941476 command_runner.go:130] > # always happen on a node reboot
	I1213 10:29:44.888797  941476 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:29:44.888807  941476 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:29:44.888813  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:29:44.888818  941476 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:29:44.888822  941476 command_runner.go:130] > # version_file_persist = ""
	I1213 10:29:44.888829  941476 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:29:44.888839  941476 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:29:44.888843  941476 command_runner.go:130] > # internal_wipe = true
	I1213 10:29:44.888851  941476 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:29:44.888856  941476 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:29:44.888860  941476 command_runner.go:130] > # internal_repair = true
	I1213 10:29:44.888869  941476 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:29:44.888875  941476 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:29:44.888881  941476 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:29:44.888886  941476 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:29:44.888892  941476 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:29:44.888895  941476 command_runner.go:130] > [crio.api]
	I1213 10:29:44.888901  941476 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:29:44.888905  941476 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:29:44.888910  941476 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:29:44.888914  941476 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:29:44.888921  941476 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:29:44.888926  941476 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:29:44.888929  941476 command_runner.go:130] > # stream_port = "0"
	I1213 10:29:44.888934  941476 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:29:44.888938  941476 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:29:44.888944  941476 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:29:44.889110  941476 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:29:44.889121  941476 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:29:44.889127  941476 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889131  941476 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:29:44.889137  941476 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:29:44.889143  941476 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889156  941476 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:29:44.889162  941476 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:29:44.889169  941476 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:29:44.889177  941476 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:29:44.889181  941476 command_runner.go:130] > # stream_tls_ca = ""
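Taken together, the stream-server options above map onto a crio.conf snippet like the following sketch; the certificate paths are hypothetical placeholders, not values from this run:

	[crio.api]
	stream_address = "127.0.0.1"
	stream_port = "0"
	stream_enable_tls = true
	# cert, key, and client CA are reloaded automatically when the files change
	stream_tls_cert = "/etc/crio/stream.crt"   # hypothetical path
	stream_tls_key = "/etc/crio/stream.key"    # hypothetical path
	stream_tls_ca = "/etc/crio/stream-ca.crt"  # hypothetical path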
	I1213 10:29:44.889197  941476 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889202  941476 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:29:44.889209  941476 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889214  941476 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:29:44.889220  941476 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:29:44.889225  941476 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:29:44.889229  941476 command_runner.go:130] > [crio.runtime]
	I1213 10:29:44.889235  941476 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:29:44.889240  941476 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:29:44.889244  941476 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:29:44.889253  941476 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:29:44.889257  941476 command_runner.go:130] > # default_ulimits = [
	I1213 10:29:44.889260  941476 command_runner.go:130] > # ]
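As a sketch of the "<ulimit name>=<soft limit>:<hard limit>" format described above, reusing the nofile example from the comment (not a setting from this run):

	[crio.runtime]
	default_ulimits = [
		"nofile=1024:2048",
	]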
	I1213 10:29:44.889265  941476 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:29:44.889269  941476 command_runner.go:130] > # no_pivot = false
	I1213 10:29:44.889274  941476 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:29:44.889280  941476 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:29:44.889285  941476 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:29:44.889291  941476 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:29:44.889296  941476 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:29:44.889318  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889322  941476 command_runner.go:130] > # conmon = ""
	I1213 10:29:44.889327  941476 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:29:44.889333  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:29:44.889512  941476 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:29:44.889563  941476 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:29:44.889585  941476 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:29:44.889610  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889647  941476 command_runner.go:130] > # conmon_env = [
	I1213 10:29:44.889671  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889696  941476 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:29:44.889721  941476 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:29:44.889753  941476 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:29:44.889776  941476 command_runner.go:130] > # default_env = [
	I1213 10:29:44.889797  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889822  941476 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:29:44.889858  941476 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:29:44.889885  941476 command_runner.go:130] > # selinux = false
	I1213 10:29:44.889906  941476 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:29:44.889932  941476 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:29:44.889962  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.889985  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.890009  941476 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:29:44.890029  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890061  941476 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:29:44.890087  941476 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:29:44.890109  941476 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:29:44.890133  941476 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:29:44.890166  941476 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:29:44.890191  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890212  941476 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:29:44.890236  941476 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:29:44.890284  941476 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:29:44.890307  941476 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:29:44.890329  941476 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:29:44.890350  941476 command_runner.go:130] > # blockio parameters.
	I1213 10:29:44.890409  941476 command_runner.go:130] > # blockio_reload = false
	I1213 10:29:44.890437  941476 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:29:44.890458  941476 command_runner.go:130] > # irqbalance daemon.
	I1213 10:29:44.890483  941476 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:29:44.890515  941476 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:29:44.890551  941476 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:29:44.890575  941476 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:29:44.890599  941476 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:29:44.890631  941476 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:29:44.890655  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890676  941476 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:29:44.890716  941476 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:29:44.890743  941476 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:29:44.890767  941476 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:29:44.890788  941476 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:29:44.890824  941476 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:29:44.890863  941476 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:29:44.890886  941476 command_runner.go:130] > # will be added.
	I1213 10:29:44.890904  941476 command_runner.go:130] > # default_capabilities = [
	I1213 10:29:44.890932  941476 command_runner.go:130] > # 	"CHOWN",
	I1213 10:29:44.890957  941476 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:29:44.891256  941476 command_runner.go:130] > # 	"FSETID",
	I1213 10:29:44.891291  941476 command_runner.go:130] > # 	"FOWNER",
	I1213 10:29:44.891318  941476 command_runner.go:130] > # 	"SETGID",
	I1213 10:29:44.891335  941476 command_runner.go:130] > # 	"SETUID",
	I1213 10:29:44.891390  941476 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:29:44.891416  941476 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:29:44.891438  941476 command_runner.go:130] > # 	"KILL",
	I1213 10:29:44.891461  941476 command_runner.go:130] > # ]
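Uncommenting the list above yields the snippet below; removing an entry drops that capability from the default set. This is a sketch using exactly the commented defaults shown:

	[crio.runtime]
	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"FSETID",
		"FOWNER",
		"SETGID",
		"SETUID",
		"SETPCAP",
		"NET_BIND_SERVICE",
		"KILL",
	]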
	I1213 10:29:44.891498  941476 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:29:44.891527  941476 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:29:44.891550  941476 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:29:44.891572  941476 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:29:44.891606  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891629  941476 command_runner.go:130] > default_sysctls = [
	I1213 10:29:44.891651  941476 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:29:44.891671  941476 command_runner.go:130] > ]
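The active default_sysctls block above already demonstrates the "key=value" format; adding another entry is a one-line change. The second sysctl below is purely illustrative, not part of this run's config:

	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
		"net.ipv4.ping_group_range=0 2147483647",  # hypothetical addition
	]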
	I1213 10:29:44.891705  941476 command_runner.go:130] > # List of devices on the host that a
	I1213 10:29:44.891730  941476 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:29:44.891749  941476 command_runner.go:130] > # allowed_devices = [
	I1213 10:29:44.891779  941476 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:29:44.891809  941476 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:29:44.891834  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891856  941476 command_runner.go:130] > # List of additional devices, specified as
	I1213 10:29:44.891880  941476 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:29:44.891914  941476 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:29:44.891940  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891962  941476 command_runner.go:130] > # additional_devices = [
	I1213 10:29:44.891983  941476 command_runner.go:130] > # ]
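A sketch of the "<device-on-host>:<device-on-container>:<permissions>" form described above, reusing the /dev/fuse device from the allowed_devices default:

	[crio.runtime]
	additional_devices = [
		"/dev/fuse:/dev/fuse:rwm",
	]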
	I1213 10:29:44.892017  941476 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:29:44.892041  941476 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:29:44.892063  941476 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:29:44.892082  941476 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:29:44.892103  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892139  941476 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:29:44.892161  941476 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:29:44.892183  941476 command_runner.go:130] > # Defaults to false.
	I1213 10:29:44.892215  941476 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:29:44.892243  941476 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:29:44.892267  941476 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:29:44.892287  941476 command_runner.go:130] > # hooks_dir = [
	I1213 10:29:44.892324  941476 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:29:44.892349  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892371  941476 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:29:44.892394  941476 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:29:44.892427  941476 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:29:44.892450  941476 command_runner.go:130] > #
	I1213 10:29:44.892472  941476 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:29:44.892496  941476 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:29:44.892529  941476 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:29:44.892555  941476 command_runner.go:130] > #
	I1213 10:29:44.892582  941476 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:29:44.892608  941476 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:29:44.892654  941476 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:29:44.892680  941476 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:29:44.892700  941476 command_runner.go:130] > #
	I1213 10:29:44.892722  941476 command_runner.go:130] > # default_mounts_file = ""
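A sketch, assuming a custom mounts file is wanted; the file itself holds one /SRC:/DST pair per line as described above (both the path and the mount shown are illustrative):

	[crio.runtime]
	default_mounts_file = "/etc/containers/mounts.conf"
	# contents of that file, one mount per line, e.g.:
	#   /usr/share/secrets:/run/secrets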
	I1213 10:29:44.892742  941476 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:29:44.892779  941476 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:29:44.892797  941476 command_runner.go:130] > # pids_limit = -1
	I1213 10:29:44.892825  941476 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 10:29:44.892860  941476 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:29:44.892886  941476 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:29:44.892912  941476 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:29:44.892937  941476 command_runner.go:130] > # log_size_max = -1
	I1213 10:29:44.892967  941476 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:29:44.892992  941476 command_runner.go:130] > # log_to_journald = false
	I1213 10:29:44.893016  941476 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:29:44.893040  941476 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:29:44.893073  941476 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:29:44.893097  941476 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:29:44.893118  941476 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:29:44.893142  941476 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:29:44.893174  941476 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:29:44.893198  941476 command_runner.go:130] > # read_only = false
	I1213 10:29:44.893223  941476 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:29:44.893245  941476 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:29:44.893278  941476 command_runner.go:130] > # live configuration reload.
	I1213 10:29:44.893302  941476 command_runner.go:130] > # log_level = "info"
	I1213 10:29:44.893331  941476 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:29:44.893353  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.893380  941476 command_runner.go:130] > # log_filter = ""
	I1213 10:29:44.893406  941476 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893430  941476 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:29:44.893452  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893486  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893520  941476 command_runner.go:130] > # uid_mappings = ""
	I1213 10:29:44.893564  941476 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893593  941476 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:29:44.893617  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893643  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893997  941476 command_runner.go:130] > # gid_mappings = ""
	I1213 10:29:44.894010  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:29:44.894017  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894024  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894032  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894037  941476 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:29:44.894043  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:29:44.894050  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894056  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894064  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894068  941476 command_runner.go:130] > # minimum_mappable_gid = -1
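A sketch of the containerUID:HostUID:Size range form documented above (these options are deprecated per the comments; the ranges shown are illustrative):

	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"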
	I1213 10:29:44.894074  941476 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:29:44.894081  941476 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:29:44.894086  941476 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:29:44.894090  941476 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:29:44.894096  941476 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:29:44.894102  941476 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:29:44.894107  941476 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:29:44.894111  941476 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:29:44.894115  941476 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:29:44.894121  941476 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:29:44.894127  941476 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:29:44.894135  941476 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:29:44.894141  941476 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:29:44.894149  941476 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:29:44.894155  941476 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:29:44.894160  941476 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:29:44.894165  941476 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:29:44.894173  941476 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:29:44.894179  941476 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:29:44.894184  941476 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:29:44.894188  941476 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:29:44.894195  941476 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:29:44.894199  941476 command_runner.go:130] > # pinns_path = ""
	I1213 10:29:44.894204  941476 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:29:44.894210  941476 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:29:44.894216  941476 command_runner.go:130] > # enable_criu_support = true
	I1213 10:29:44.894223  941476 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:29:44.894229  941476 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:29:44.894234  941476 command_runner.go:130] > # enable_pod_events = false
	I1213 10:29:44.894240  941476 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:29:44.894245  941476 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:29:44.894249  941476 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:29:44.894254  941476 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:29:44.894261  941476 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1213 10:29:44.894271  941476 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:29:44.894276  941476 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:29:44.894284  941476 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:29:44.894289  941476 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:29:44.894293  941476 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:29:44.894297  941476 command_runner.go:130] > # ]
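Using the /etc/hostname case mentioned above as the entry, the uncommented option would read:

	[crio.runtime]
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]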
	I1213 10:29:44.894303  941476 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:29:44.894309  941476 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:29:44.894316  941476 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:29:44.894321  941476 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:29:44.894324  941476 command_runner.go:130] > #
	I1213 10:29:44.894329  941476 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:29:44.894333  941476 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:29:44.894337  941476 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:29:44.894342  941476 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:29:44.894348  941476 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:29:44.894367  941476 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:29:44.894372  941476 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:29:44.894377  941476 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:29:44.894381  941476 command_runner.go:130] > # monitor_env = []
	I1213 10:29:44.894386  941476 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:29:44.894390  941476 command_runner.go:130] > # allowed_annotations = []
	I1213 10:29:44.894395  941476 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:29:44.894399  941476 command_runner.go:130] > # no_sync_log = false
	I1213 10:29:44.894403  941476 command_runner.go:130] > # default_annotations = {}
	I1213 10:29:44.894407  941476 command_runner.go:130] > # stream_websockets = false
	I1213 10:29:44.894411  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.894442  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.894448  941476 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:29:44.894454  941476 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:29:44.894461  941476 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:29:44.894468  941476 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:29:44.894471  941476 command_runner.go:130] > #   in $PATH.
	I1213 10:29:44.894478  941476 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:29:44.894482  941476 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:29:44.894488  941476 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:29:44.894492  941476 command_runner.go:130] > #   state.
	I1213 10:29:44.894498  941476 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:29:44.894504  941476 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1213 10:29:44.894510  941476 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:29:44.894516  941476 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:29:44.894521  941476 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:29:44.894527  941476 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:29:44.894533  941476 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:29:44.894539  941476 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:29:44.894545  941476 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:29:44.894550  941476 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:29:44.894557  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:29:44.894564  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:29:44.894574  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:29:44.894580  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:29:44.894588  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:29:44.894596  941476 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:29:44.894602  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:29:44.894608  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:29:44.894614  941476 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:29:44.894621  941476 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:29:44.894628  941476 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:29:44.894634  941476 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:29:44.894640  941476 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:29:44.894646  941476 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:29:44.894652  941476 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:29:44.894661  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:29:44.894667  941476 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:29:44.894672  941476 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:29:44.894679  941476 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:29:44.894684  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:29:44.894691  941476 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:29:44.894695  941476 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:29:44.894702  941476 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:29:44.894707  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:29:44.894714  941476 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 10:29:44.894718  941476 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:29:44.894726  941476 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:29:44.894734  941476 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:29:44.894742  941476 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:29:44.894748  941476 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:29:44.894753  941476 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:29:44.894760  941476 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:29:44.894768  941476 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:29:44.894774  941476 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:29:44.894782  941476 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:29:44.894794  941476 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:29:44.894798  941476 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:29:44.894805  941476 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:29:44.894813  941476 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:29:44.894821  941476 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:29:44.894825  941476 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:29:44.894838  941476 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:29:44.894848  941476 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:29:44.894851  941476 command_runner.go:130] > #
	I1213 10:29:44.894855  941476 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:29:44.894859  941476 command_runner.go:130] > #
	I1213 10:29:44.894866  941476 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:29:44.894872  941476 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:29:44.894878  941476 command_runner.go:130] > #
	I1213 10:29:44.894887  941476 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:29:44.894893  941476 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:29:44.894896  941476 command_runner.go:130] > #
	I1213 10:29:44.894903  941476 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:29:44.894906  941476 command_runner.go:130] > # feature.
	I1213 10:29:44.894909  941476 command_runner.go:130] > #
	I1213 10:29:44.894914  941476 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:29:44.894921  941476 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:29:44.894927  941476 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:29:44.894933  941476 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:29:44.894939  941476 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:29:44.894942  941476 command_runner.go:130] > #
	I1213 10:29:44.894948  941476 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:29:44.894954  941476 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:29:44.894957  941476 command_runner.go:130] > #
	I1213 10:29:44.894963  941476 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:29:44.894968  941476 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:29:44.894971  941476 command_runner.go:130] > #
	I1213 10:29:44.894977  941476 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:29:44.894987  941476 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:29:44.894991  941476 command_runner.go:130] > # limitation.
	I1213 10:29:44.894995  941476 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:29:44.895000  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:29:44.895004  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895008  941476 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:29:44.895013  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895016  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895020  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895025  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895028  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895032  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895036  941476 command_runner.go:130] > allowed_annotations = [
	I1213 10:29:44.895040  941476 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:29:44.895043  941476 command_runner.go:130] > ]
	I1213 10:29:44.895047  941476 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:29:44.895051  941476 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:29:44.895056  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:29:44.895059  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895064  941476 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:29:44.895069  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895072  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895076  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895081  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895084  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895089  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895093  941476 command_runner.go:130] > privileged_without_host_devices = false
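Following the runtime-handler template documented above, registering an additional handler is a matter of adding another table. The "kata" name and both paths below are hypothetical placeholders, not runtimes present in this run; runtime_config_path is only valid here because runtime_type is "vm":

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"                # hypothetical path
	runtime_type = "vm"
	runtime_config_path = "/etc/kata/configuration.toml"  # hypothetical path
	privileged_without_host_devices = true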
	I1213 10:29:44.895100  941476 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:29:44.895105  941476 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:29:44.895111  941476 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:29:44.895119  941476 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 10:29:44.895129  941476 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:29:44.895139  941476 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:29:44.895151  941476 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:29:44.895156  941476 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:29:44.895166  941476 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:29:44.895174  941476 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:29:44.895181  941476 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:29:44.895188  941476 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:29:44.895191  941476 command_runner.go:130] > # Example:
	I1213 10:29:44.895196  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:29:44.895201  941476 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:29:44.895207  941476 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:29:44.895212  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:29:44.895216  941476 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:29:44.895219  941476 command_runner.go:130] > # cpushares = "5"
	I1213 10:29:44.895223  941476 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:29:44.895227  941476 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:29:44.895230  941476 command_runner.go:130] > # cpulimit = "35"
	I1213 10:29:44.895234  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.895238  941476 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:29:44.895245  941476 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:29:44.895250  941476 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:29:44.895259  941476 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:29:44.895267  941476 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:29:44.895274  941476 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 10:29:44.895279  941476 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:29:44.895286  941476 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:29:44.895290  941476 command_runner.go:130] > # Default value is set to true
	I1213 10:29:44.895294  941476 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:29:44.895300  941476 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:29:44.895305  941476 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:29:44.895309  941476 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:29:44.895313  941476 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:29:44.895318  941476 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1213 10:29:44.895326  941476 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:29:44.895334  941476 command_runner.go:130] > # timezone = ""
	I1213 10:29:44.895341  941476 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:29:44.895343  941476 command_runner.go:130] > #
	I1213 10:29:44.895349  941476 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:29:44.895355  941476 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:29:44.895358  941476 command_runner.go:130] > [crio.image]
	I1213 10:29:44.895364  941476 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:29:44.895368  941476 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:29:44.895373  941476 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:29:44.895380  941476 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895383  941476 command_runner.go:130] > # global_auth_file = ""
	I1213 10:29:44.895388  941476 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:29:44.895393  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895398  941476 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.895404  941476 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:29:44.895412  941476 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895417  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895420  941476 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:29:44.895426  941476 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:29:44.895432  941476 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 10:29:44.895438  941476 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 10:29:44.895444  941476 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:29:44.895448  941476 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:29:44.895454  941476 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:29:44.895460  941476 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:29:44.895467  941476 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:29:44.895473  941476 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:29:44.895479  941476 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:29:44.895485  941476 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:29:44.895488  941476 command_runner.go:130] > # pinned_images = [
	I1213 10:29:44.895491  941476 command_runner.go:130] > # ]
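A sketch of the exact, glob, and keyword patterns described above; only the pause image reference is grounded in this config, the other two names are illustrative:

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",  # exact match
		"quay.io/myorg/*",               # glob: wildcard at the end
		"*critical*",                    # keyword: wildcards on both ends
	]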
	I1213 10:29:44.895497  941476 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:29:44.895503  941476 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:29:44.895512  941476 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:29:44.895519  941476 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:29:44.895524  941476 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:29:44.895529  941476 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:29:44.895534  941476 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:29:44.895540  941476 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:29:44.895547  941476 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:29:44.895554  941476 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 10:29:44.895559  941476 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:29:44.895564  941476 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:29:44.895570  941476 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:29:44.895576  941476 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:29:44.895580  941476 command_runner.go:130] > # changing them here.
	I1213 10:29:44.895586  941476 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:29:44.895590  941476 command_runner.go:130] > # insecure_registries = [
	I1213 10:29:44.895592  941476 command_runner.go:130] > # ]
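Since the option above is deprecated in favor of registries.conf, the equivalent per-registry setting lives in that file instead; a minimal sketch with an illustrative registry name:

	# /etc/containers/registries.conf
	[[registry]]
	location = "registry.example.internal:5000"  # hypothetical registry
	insecure = true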
	I1213 10:29:44.895598  941476 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:29:44.895603  941476 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:29:44.895609  941476 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:29:44.895614  941476 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:29:44.895618  941476 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:29:44.895623  941476 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:29:44.895630  941476 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:29:44.895634  941476 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:29:44.895641  941476 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:29:44.895651  941476 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 10:29:44.895657  941476 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:29:44.895662  941476 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:29:44.895666  941476 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:29:44.895672  941476 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:29:44.895679  941476 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1213 10:29:44.895684  941476 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:29:44.895688  941476 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:29:44.895697  941476 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:29:44.895704  941476 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:29:44.895708  941476 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 10:29:44.895715  941476 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:29:44.895718  941476 command_runner.go:130] > # CNI plugins.
	I1213 10:29:44.895721  941476 command_runner.go:130] > [crio.network]
	I1213 10:29:44.895727  941476 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:29:44.895732  941476 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 10:29:44.895735  941476 command_runner.go:130] > # cni_default_network = ""
	I1213 10:29:44.895741  941476 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:29:44.895745  941476 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:29:44.895751  941476 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:29:44.895754  941476 command_runner.go:130] > # plugin_dirs = [
	I1213 10:29:44.895758  941476 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:29:44.895760  941476 command_runner.go:130] > # ]
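Uncommented, the [crio.network] defaults above read as follows; the second plugin directory is a hypothetical extra search path, not part of this run's config:

	[crio.network]
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
		"/usr/libexec/cni/",  # hypothetical additional search path
	]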
	I1213 10:29:44.895764  941476 command_runner.go:130] > # List of included pod metrics.
	I1213 10:29:44.895768  941476 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:29:44.895771  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895778  941476 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:29:44.895781  941476 command_runner.go:130] > [crio.metrics]
	I1213 10:29:44.895786  941476 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:29:44.895790  941476 command_runner.go:130] > # enable_metrics = false
	I1213 10:29:44.895794  941476 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:29:44.895799  941476 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:29:44.895805  941476 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:29:44.895813  941476 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:29:44.895818  941476 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:29:44.895822  941476 command_runner.go:130] > # metrics_collectors = [
	I1213 10:29:44.895826  941476 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:29:44.895831  941476 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:29:44.895834  941476 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:29:44.895838  941476 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:29:44.895842  941476 command_runner.go:130] > # 	"operations_total",
	I1213 10:29:44.895849  941476 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:29:44.895854  941476 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:29:44.895859  941476 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:29:44.895863  941476 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:29:44.895867  941476 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:29:44.895871  941476 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:29:44.895875  941476 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:29:44.895879  941476 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:29:44.895883  941476 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:29:44.895888  941476 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:29:44.895892  941476 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:29:44.895896  941476 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:29:44.895899  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895905  941476 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:29:44.895908  941476 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:29:44.895913  941476 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:29:44.895917  941476 command_runner.go:130] > # metrics_port = 9090
	I1213 10:29:44.895922  941476 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:29:44.895925  941476 command_runner.go:130] > # metrics_socket = ""
	I1213 10:29:44.895930  941476 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:29:44.895937  941476 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:29:44.895943  941476 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:29:44.895947  941476 command_runner.go:130] > # certificate on any modification event.
	I1213 10:29:44.895951  941476 command_runner.go:130] > # metrics_cert = ""
	I1213 10:29:44.895955  941476 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:29:44.895960  941476 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:29:44.895963  941476 command_runner.go:130] > # metrics_key = ""
	I1213 10:29:44.895969  941476 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:29:44.895972  941476 command_runner.go:130] > [crio.tracing]
	I1213 10:29:44.895978  941476 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:29:44.895981  941476 command_runner.go:130] > # enable_tracing = false
	I1213 10:29:44.895987  941476 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1213 10:29:44.895991  941476 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:29:44.896000  941476 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:29:44.896007  941476 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:29:44.896011  941476 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:29:44.896014  941476 command_runner.go:130] > [crio.nri]
	I1213 10:29:44.896018  941476 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:29:44.896022  941476 command_runner.go:130] > # enable_nri = true
	I1213 10:29:44.896025  941476 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:29:44.896030  941476 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:29:44.896034  941476 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:29:44.896038  941476 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:29:44.896043  941476 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:29:44.896051  941476 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:29:44.896057  941476 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:29:44.896113  941476 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:29:44.896119  941476 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:29:44.896123  941476 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:29:44.896128  941476 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:29:44.896133  941476 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:29:44.896137  941476 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:29:44.896144  941476 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:29:44.896150  941476 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:29:44.896155  941476 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:29:44.896158  941476 command_runner.go:130] > # - OCI hook injection
	I1213 10:29:44.896163  941476 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:29:44.896167  941476 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:29:44.896172  941476 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:29:44.896176  941476 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:29:44.896186  941476 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:29:44.896193  941476 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:29:44.896198  941476 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:29:44.896201  941476 command_runner.go:130] > #
	I1213 10:29:44.896205  941476 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:29:44.896209  941476 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:29:44.896218  941476 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:29:44.896223  941476 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:29:44.896229  941476 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:29:44.896234  941476 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:29:44.896239  941476 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:29:44.896243  941476 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:29:44.896245  941476 command_runner.go:130] > # ]
	I1213 10:29:44.896251  941476 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 10:29:44.896257  941476 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:29:44.896261  941476 command_runner.go:130] > [crio.stats]
	I1213 10:29:44.896267  941476 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:29:44.896272  941476 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:29:44.896276  941476 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:29:44.896281  941476 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:29:44.896287  941476 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:29:44.896291  941476 command_runner.go:130] > # collection_period = 0
	I1213 10:29:44.896753  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865564739Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:29:44.896774  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865608538Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:29:44.896784  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865641285Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:29:44.896793  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.86566636Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:29:44.896803  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865746328Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.896812  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.866102466Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:29:44.896826  941476 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
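	The block above is CRI-O echoing its effective configuration at start-up; the commented-out keys show defaults. A minimal sketch for spot-checking two of those defaults on the node, assuming metrics were actually enabled (enable_metrics = true) on the default 127.0.0.1:9090 and that the stock CNI directory is in use:
	
	  # list the CNI configs CRI-O would pick up from network_dir
	  ls /etc/cni/net.d/
	  # scrape the Prometheus endpoint (only answers if enable_metrics = true)
	  curl -s http://127.0.0.1:9090/metrics | head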
	I1213 10:29:44.896949  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:44.896967  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:44.896990  941476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:29:44.897016  941476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:29:44.897147  941476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
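	The four config documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged sketch, a kubeadm release new enough to ship the validate subcommand can structurally check such a file before it is used:
	
	  # dry-run validation of the generated config (requires a kubeadm with `config validate`)
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new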
	
	I1213 10:29:44.897221  941476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:29:44.904800  941476 command_runner.go:130] > kubeadm
	I1213 10:29:44.904821  941476 command_runner.go:130] > kubectl
	I1213 10:29:44.904825  941476 command_runner.go:130] > kubelet
	I1213 10:29:44.905083  941476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:29:44.905149  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:29:44.912855  941476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:29:44.926542  941476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:29:44.940018  941476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:29:44.953058  941476 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:29:44.956927  941476 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
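	The grep above verifies the control-plane alias is already present in /etc/hosts before minikube would append it. The same idempotent pattern, sketched with the IP and hostname from this run:
	
	  # append the entry only when it is missing
	  grep -q 'control-plane.minikube.internal' /etc/hosts || \
	    echo '192.168.49.2	control-plane.minikube.internal' | sudo tee -a /etc/hosts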
	I1213 10:29:44.957067  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.090811  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:45.111343  941476 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:29:45.111425  941476 certs.go:195] generating shared ca certs ...
	I1213 10:29:45.111459  941476 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.111653  941476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:29:45.111736  941476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:29:45.111762  941476 certs.go:257] generating profile certs ...
	I1213 10:29:45.111936  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:29:45.112043  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:29:45.112141  941476 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:29:45.112183  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:29:45.112222  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:29:45.112262  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:29:45.112293  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:29:45.112328  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:29:45.112371  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:29:45.112404  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:29:45.112444  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:29:45.112521  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:29:45.112600  941476 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:29:45.112629  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:29:45.112687  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:29:45.112733  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:29:45.112831  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:29:45.113060  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:45.113147  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem -> /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.113186  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.113227  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.113935  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:29:45.163864  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:29:45.189286  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:29:45.237278  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:29:45.263467  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:29:45.289513  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:29:45.309018  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:29:45.329141  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:29:45.347665  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:29:45.365433  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:29:45.383209  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:29:45.402144  941476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:29:45.415520  941476 ssh_runner.go:195] Run: openssl version
	I1213 10:29:45.421431  941476 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:29:45.421939  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.429504  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:29:45.436991  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440561  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440796  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440864  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.483791  941476 command_runner.go:130] > 51391683
	I1213 10:29:45.484209  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:29:45.491520  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.498932  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:29:45.509018  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513215  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513301  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513386  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.554662  941476 command_runner.go:130] > 3ec20f2e
	I1213 10:29:45.555104  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:29:45.562598  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.570035  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:29:45.578308  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582322  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582399  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582459  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.623357  941476 command_runner.go:130] > b5213941
	I1213 10:29:45.623846  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
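	The short hex values printed above (51391683, 3ec20f2e, b5213941) are OpenSSL subject hashes; the follow-up ln -fs and test -L calls create and verify the <hash>.0 symlinks that OpenSSL's certificate lookup expects under /etc/ssl/certs. A minimal sketch of the same convention for an arbitrary CA file (the path here is hypothetical):
	
	  CERT=/usr/share/ca-certificates/example.pem   # hypothetical path
	  HASH=$(openssl x509 -hash -noout -in "$CERT")
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"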
	I1213 10:29:45.631423  941476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635203  941476 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635226  941476 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:29:45.635232  941476 command_runner.go:130] > Device: 259,1	Inode: 1052598     Links: 1
	I1213 10:29:45.635239  941476 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:45.635245  941476 command_runner.go:130] > Access: 2025-12-13 10:25:37.832562674 +0000
	I1213 10:29:45.635250  941476 command_runner.go:130] > Modify: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635255  941476 command_runner.go:130] > Change: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635260  941476 command_runner.go:130] >  Birth: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635337  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:29:45.676331  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.676780  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:29:45.719984  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.720440  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:29:45.763044  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.763152  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:29:45.804752  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.805187  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:29:45.846806  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.847203  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:29:45.898203  941476 command_runner.go:130] > Certificate will not expire
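	Each of the -checkend 86400 probes above asks whether a certificate expires within the next 24 hours (86400 seconds): openssl prints "Certificate will not expire" and exits 0 when it does not, or "Certificate will expire" and exits non-zero when it does. A usage sketch:
	
	  # succeed only while the cert stays valid for at least another day
	  if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	    echo "apiserver cert ok"
	  fi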
	I1213 10:29:45.898680  941476 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:45.898809  941476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:29:45.898933  941476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:29:45.924889  941476 cri.go:89] found id: ""
	I1213 10:29:45.924989  941476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:29:45.932161  941476 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:29:45.932226  941476 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:29:45.932248  941476 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:29:45.933123  941476 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:29:45.933177  941476 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:29:45.933244  941476 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:29:45.940638  941476 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:29:45.941072  941476 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-200955" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.941185  941476 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-904040/kubeconfig needs updating (will repair): [kubeconfig missing "functional-200955" cluster setting kubeconfig missing "functional-200955" context setting]
	I1213 10:29:45.941452  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.941955  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.942103  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.942644  941476 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:29:45.942668  941476 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:29:45.942678  941476 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:29:45.942683  941476 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:29:45.942687  941476 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:29:45.942727  941476 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:29:45.943068  941476 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:29:45.951089  941476 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:29:45.951121  941476 kubeadm.go:602] duration metric: took 17.93243ms to restartPrimaryControlPlane
	I1213 10:29:45.951143  941476 kubeadm.go:403] duration metric: took 52.461003ms to StartCluster
	I1213 10:29:45.951159  941476 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951223  941476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.951796  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951989  941476 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:29:45.952368  941476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:29:45.952448  941476 addons.go:70] Setting storage-provisioner=true in profile "functional-200955"
	I1213 10:29:45.952463  941476 addons.go:239] Setting addon storage-provisioner=true in "functional-200955"
	I1213 10:29:45.952488  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.952566  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:45.952610  941476 addons.go:70] Setting default-storageclass=true in profile "functional-200955"
	I1213 10:29:45.952623  941476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-200955"
	I1213 10:29:45.952911  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.952951  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.958523  941476 out.go:179] * Verifying Kubernetes components...
	I1213 10:29:45.963377  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.989193  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.989357  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.989643  941476 addons.go:239] Setting addon default-storageclass=true in "functional-200955"
	I1213 10:29:45.989674  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.990084  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.996374  941476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:29:45.999301  941476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:45.999325  941476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:29:45.999389  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.025120  941476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.025146  941476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:29:46.025210  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.047237  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.065614  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.182514  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:46.188367  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:46.228034  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.975760  941476 node_ready.go:35] waiting up to 6m0s for node "functional-200955" to be "Ready" ...
	I1213 10:29:46.975884  941476 type.go:168] "Request Body" body=""
	I1213 10:29:46.975940  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:46.976159  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976214  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976242  941476 retry.go:31] will retry after 310.714541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976276  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976296  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976306  941476 retry.go:31] will retry after 212.322267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.188794  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.245508  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.249207  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.249253  941476 retry.go:31] will retry after 232.449188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.287510  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.352377  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.355988  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.356022  941476 retry.go:31] will retry after 216.845813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.476461  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.476540  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.476866  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.482125  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.540633  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.540674  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.540713  941476 retry.go:31] will retry after 621.150122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.573847  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.632148  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.632198  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.632239  941476 retry.go:31] will retry after 652.105841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.976625  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.976714  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.977047  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.162374  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.224014  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.224050  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.224096  941476 retry.go:31] will retry after 486.360631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.285241  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:48.341512  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.345196  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.345232  941476 retry.go:31] will retry after 851.054667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.476501  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.476654  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.477264  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.710766  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.774597  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.774656  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.774677  941476 retry.go:31] will retry after 1.42902923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:48.976568  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
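Between applies, the node_ready.go and round_trippers lines poll GET /api/v1/nodes/functional-200955 roughly every 500ms, waiting for the node's Ready condition. A sketch of the same readiness poll written against client-go; the kubeconfig path and node name are taken from the log, and the interval mirrors the request timestamps:

// Poll a node's Ready condition via client-go, retrying on errors
// such as the "connection refused" seen above while the apiserver
// is down.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-200955", metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}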
	I1213 10:29:49.197102  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:49.269601  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:49.269709  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.269757  941476 retry.go:31] will retry after 1.296706305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.476109  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.476573  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:49.976081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.976179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.204048  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:50.263787  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.263835  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.263857  941476 retry.go:31] will retry after 2.257067811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.476081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.476171  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
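The Request/Response lines in this log follow klog v2 structured output: a message followed by key=value pairs, with multi-line values wrapped in =< ... > as in the headers blocks above. A minimal sketch reproducing that shape with k8s.io/klog/v2; the verbosity level is an assumption, and the values are copied from the log:

// Emit klog structured entries in the same shape as the
// round_trippers lines above.
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)
	_ = flag.Set("v", "6")
	flag.Parse()
	defer klog.Flush()

	// A multi-line value renders as headers=< ... >, as seen above.
	klog.V(6).InfoS("Request", "verb", "GET",
		"url", "https://192.168.49.2:8441/api/v1/nodes/functional-200955",
		"headers", "Accept: application/vnd.kubernetes.protobuf,application/json\nUser-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")
	klog.V(6).InfoS("Response", "status", "", "milliseconds", 0)
}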
	I1213 10:29:50.566907  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:50.629271  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.629314  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.629333  941476 retry.go:31] will retry after 1.765407868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.976841  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.976923  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.977217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:50.977269  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
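Every failure above bottoms out in the same dial error: nothing is listening on port 8441, so both kubectl's OpenAPI fetch (localhost:8441) and the direct node GETs (192.168.49.2:8441) are refused. A quick reachability probe for those endpoints, as an illustrative aid rather than anything minikube itself runs; the addresses are taken from the log:

// Probe the apiserver TCP endpoints before burning time on retries.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s unreachable: %v\n", addr, err) // matches the "connection refused" above
			continue
		}
		conn.Close()
		fmt.Printf("%s reachable\n", addr)
	}
}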
	I1213 10:29:51.475933  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.476012  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.476290  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:51.976028  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.395020  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:52.456823  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.456875  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.456899  941476 retry.go:31] will retry after 1.561909689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.476063  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.476147  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.476449  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.521915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:52.578203  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.581870  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.581904  941476 retry.go:31] will retry after 3.834800834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.976296  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.976371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.976640  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:53.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:53.476481  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:53.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.976238  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.976665  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.019913  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:54.081795  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:54.081851  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.081875  941476 retry.go:31] will retry after 4.858817388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.476105  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.476182  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.476432  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.976093  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.976415  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:55.476129  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.476226  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:55.476588  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:55.976456  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.976520  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:56.417572  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:56.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.476423  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:56.480436  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.480483  941476 retry.go:31] will retry after 4.792687173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.976051  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.976145  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.476051  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.976104  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.976249  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.976601  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:57.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:58.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.476277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:58.940954  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:58.976372  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.976458  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.976716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.010699  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:59.010740  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.010759  941476 retry.go:31] will retry after 7.734765537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.476520  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.476594  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.476930  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.976794  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.977198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:59.977252  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:00.476972  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.477066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.477383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:00.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.976196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.976547  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.274155  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:01.347774  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:01.347813  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.347834  941476 retry.go:31] will retry after 9.325183697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.478515  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.478628  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.479014  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.976839  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.976947  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.977331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:01.977404  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:02.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.476537  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.976649  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.476192  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.476276  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.976228  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.976352  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.976726  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:04.476318  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.476410  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.476740  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:04.476799  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:04.976561  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.976631  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.976878  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.476699  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.476787  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.477120  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.977016  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.977144  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.977510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.476060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.476330  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.746112  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:06.805144  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:06.808651  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.808685  941476 retry.go:31] will retry after 7.088599712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.976026  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.976116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:06.976507  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:07.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.476279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.476634  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:07.976084  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.976170  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.976444  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.476153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.476482  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.976213  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.976308  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.976642  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:08.976701  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:09.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.476212  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:09.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.976492  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.476265  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.476368  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.476715  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.673230  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:10.732312  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:10.736051  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.736087  941476 retry.go:31] will retry after 8.123592788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.976475  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.976550  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.976847  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:10.976888  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:11.476725  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.476822  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.477169  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:11.976044  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.976120  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.976458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.976059  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.976141  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:13.476058  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.476137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.476490  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:13.476548  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:13.898101  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:13.964340  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:13.967836  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.967879  941476 retry.go:31] will retry after 8.492520723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.976068  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.476442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.976067  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.976142  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.476080  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.975941  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.976026  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.976392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:15.976452  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:16.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.476159  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:16.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.976102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.476174  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.976100  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.976180  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.976600  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:17.976654  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:18.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:18.476393  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:18.859953  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:18.916800  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:18.920763  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.920813  941476 retry.go:31] will retry after 11.17407044s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.976006  941476 type.go:168] "Request Body" body=""
	I1213 10:30:18.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:18.976434  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:19.476057  941476 type.go:168] "Request Body" body=""
	I1213 10:30:19.476156  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:19.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:19.975977  941476 type.go:168] "Request Body" body=""
	I1213 10:30:19.976055  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:19.976310  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:20.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:30:20.476128  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:20.476491  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:20.476556  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:20.976222  941476 type.go:168] "Request Body" body=""
	I1213 10:30:20.976298  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:20.976627  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:21.476132  941476 type.go:168] "Request Body" body=""
	I1213 10:30:21.476230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:21.476520  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:21.976457  941476 type.go:168] "Request Body" body=""
	I1213 10:30:21.976534  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:21.976932  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:22.460571  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:22.521379  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:22.525059  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:22.525092  941476 retry.go:31] will retry after 25.139993985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
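The apply is not rejecting the manifest itself: with the apiserver on port 8441 down, kubectl's client-side validation cannot download the OpenAPI schema, so the command exits 1 before anything reaches the cluster. A minimal manual probe, assuming the kubeconfig and binary paths shown in the log (a diagnostic sketch, not part of the test):

	# Probe the apiserver's readiness endpoint with the same kubeconfig the addon manager uses
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz \
	  || echo "apiserver on :8441 still refusing connections"

The hint in the error, --validate=false, would only skip the schema download; the apply would then fail on the same refused connection, so waiting for the apiserver to come back (as the retry loop here does) is the only real fix.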
	W1213 10:30:22.977026  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:25.476736  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:27.976691  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:30.096045  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:30.160844  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:30.160891  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:30.160917  941476 retry.go:31] will retry after 23.835716192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
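Between these staggered apply retries (25.1s and 23.8s backoffs so far), node_ready.go keeps polling the node object every ~500ms, as the surrounding GET lines show. A rough shell equivalent of that readiness wait, using the endpoint from the log (illustrative only; /healthz is served to unauthenticated clients by default):

	# Poll every 500ms until the apiserver accepts connections and reports healthy
	until curl -ksf https://192.168.49.2:8441/healthz >/dev/null; do sleep 0.5; done
	echo "apiserver is answering again"

Once such a loop exits, a re-run of either kubectl apply above would pass validation normally.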
	W1213 10:30:30.476662  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:32.476999  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:34.976435  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:36.977012  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:39.476402  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:41.476514  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:43.476623  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:45.976382  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:47.665860  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:47.731394  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:47.731441  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:47.731460  941476 retry.go:31] will retry after 19.194003802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:48.476469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:50.976519  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:52.977104  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:53.997712  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:54.059604  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:54.063660  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:54.063694  941476 retry.go:31] will retry after 30.126310408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:55.476642  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:57.976502  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:30:59.976845  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:31:02.476508  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	W1213 10:31:04.976587  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:06.925824  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:31:06.976800  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:06.991385  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991438  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991540  941476 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
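After this third failed apply, minikube stops retrying storage-provisioner and surfaces the error instead of blocking startup. If the apiserver recovers later, the addon can be re-applied by hand; assuming the profile name used throughout this test (a sketch, not taken from the log):

	# Re-enable the addon once the apiserver is reachable, then look for apiserver problems
	minikube -p functional-200955 addons enable storage-provisioner
	minikube -p functional-200955 logs | grep -i apiserver

The second command is one way to look for why kube-apiserver stopped listening on 8441 in the first place.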
	W1213 10:31:09.476820  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:09.976571  941476 type.go:168] "Request Body" body=""
	I1213 10:31:09.976678  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.977059  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:10.476721  941476 type.go:168] "Request Body" body=""
	I1213 10:31:10.476799  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.477208  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:10.975925  941476 type.go:168] "Request Body" body=""
	I1213 10:31:10.975997  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.976250  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.475973  941476 type.go:168] "Request Body" body=""
	I1213 10:31:11.476050  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.476395  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.976476  941476 type.go:168] "Request Body" body=""
	I1213 10:31:11.976551  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.976955  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:11.977016  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.476754  941476 type.go:168] "Request Body" body=""
	I1213 10:31:12.476839  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.477117  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.976506  941476 type.go:168] "Request Body" body=""
	I1213 10:31:12.976583  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.976915  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.476748  941476 type.go:168] "Request Body" body=""
	I1213 10:31:13.476846  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.477198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.975894  941476 type.go:168] "Request Body" body=""
	I1213 10:31:13.975961  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.976227  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.475943  941476 type.go:168] "Request Body" body=""
	I1213 10:31:14.476062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.476400  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.476469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:14.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.976509  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.476219  941476 type.go:168] "Request Body" body=""
	I1213 10:31:15.476292  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.476567  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.976650  941476 type.go:168] "Request Body" body=""
	I1213 10:31:15.976734  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.977073  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.476854  941476 type.go:168] "Request Body" body=""
	I1213 10:31:16.476948  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.477273  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:16.477330  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:16.976000  941476 type.go:168] "Request Body" body=""
	I1213 10:31:16.976073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.976427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.476545  941476 type.go:168] "Request Body" body=""
	I1213 10:31:17.476677  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.477181  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.976852  941476 type.go:168] "Request Body" body=""
	I1213 10:31:17.976935  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.977261  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.475978  941476 type.go:168] "Request Body" body=""
	I1213 10:31:18.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.476322  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.976069  941476 type.go:168] "Request Body" body=""
	I1213 10:31:18.976149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.976500  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:18.976571  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.476245  941476 type.go:168] "Request Body" body=""
	I1213 10:31:19.476328  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.476669  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.976355  941476 type.go:168] "Request Body" body=""
	I1213 10:31:19.976423  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.976681  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.476070  941476 type.go:168] "Request Body" body=""
	I1213 10:31:20.476146  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.976239  941476 type.go:168] "Request Body" body=""
	I1213 10:31:20.976313  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.976664  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:20.976722  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:21.476108  941476 type.go:168] "Request Body" body=""
	I1213 10:31:21.476196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.476546  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:21.976459  941476 type.go:168] "Request Body" body=""
	I1213 10:31:21.976535  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.976854  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.476728  941476 type.go:168] "Request Body" body=""
	I1213 10:31:22.476820  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.477138  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.976866  941476 type.go:168] "Request Body" body=""
	I1213 10:31:22.976937  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.977188  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:22.977229  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:23.475910  941476 type.go:168] "Request Body" body=""
	I1213 10:31:23.475992  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.476337  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:23.976033  941476 type.go:168] "Request Body" body=""
	I1213 10:31:23.976146  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.976483  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.190915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:24.248888  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.248934  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.249045  941476 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:31:24.254122  941476 out.go:179] * Enabled addons: 
	I1213 10:31:24.256914  941476 addons.go:530] duration metric: took 1m38.304545325s for enable addons: enabled=[]
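
At this point the addon manager gives up after 1m38s with an empty enabled=[] set, while the node-readiness poller below keeps cycling against the refused port. Its visible behavior can be approximated with client-go as a rough sketch (assumed structure for illustration, not minikube's actual node_ready.go; the real code also throttles its warnings, whereas this sketch warns on every failure; kubeconfig path and node name are taken from the log):

    // Sketch of a 500ms readiness-poll loop against the node, matching
    // the cadence visible above: warn on error and keep retrying until
    // the node's Ready condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
    	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for range ticker.C {
    		ready, err := nodeReady(context.Background(), client, "functional-200955")
    		if err != nil {
    			fmt.Printf("W error getting node condition Ready (will retry): %v\n", err)
    			continue
    		}
    		if ready {
    			fmt.Println("node functional-200955 is Ready")
    			return
    		}
    	}
    }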
	I1213 10:31:24.476214  941476 type.go:168] "Request Body" body=""
	I1213 10:31:24.476305  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.476571  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll repeated every ~500ms from 10:31:24 to 10:32:05, every attempt refused; the "will retry" warning recurred about every 2.5s, most recently: ...]
	W1213 10:32:05.976759  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:06.476417  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.476497  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.476760  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.976645  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.976724  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.977077  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.476899  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.476981  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.477364  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.476070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.476152  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.476469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:08.476525  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:08.976049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.476367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.976056  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.976488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.476201  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.476604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:10.476662  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:10.975985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.976386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.476435  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.976014  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.976091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.476328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.976433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:12.976487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.976139  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.976217  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.976477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.476149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.476488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.976200  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.976280  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:14.976691  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:15.476331  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.476407  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.476718  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.976843  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.976916  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.977265  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.476944  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.477018  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.477394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.976173  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.476515  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:17.976191  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.976268  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.976582  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.475997  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.476340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.976113  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.976206  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.976563  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.476129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.476456  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.976166  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.976467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:19.976522  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.476441  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.976163  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.976242  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.976531  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.475975  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.476045  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.476354  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.976036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.976111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.976471  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.476157  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.476236  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.476595  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.476649  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:22.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.976063  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.976350  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.476117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.976206  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.976283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.976637  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.476065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.476346  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.976054  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.976136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.976464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:24.976520  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.476258  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.976593  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.976662  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.976936  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.476747  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.476821  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.477090  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.975948  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.976024  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.476084  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.476158  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.476474  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:27.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.976087  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.976410  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.476155  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.476244  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.476588  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.976255  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.976331  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.976594  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.476476  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:29.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:29.976055  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.976132  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.976460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.976108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.976436  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.476119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.976398  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.976466  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.976719  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:31.976758  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.476588  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.476670  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.477064  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.976842  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.976917  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.977255  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.475960  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.476032  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.476294  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.976448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.476153  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.476568  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.476624  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.976336  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.976618  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.476037  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.476116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.476453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.976396  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.976472  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.976804  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.476554  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.476624  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.476895  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.476937  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:36.976884  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.976958  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.977293  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.976074  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.976340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.476062  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.476138  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.476437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.976005  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.976078  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.976403  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.976454  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.475982  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.476428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.976002  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.976082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.476038  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.476462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.976166  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.976245  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.976502  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:40.976544  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.476073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.976208  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.976289  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.976643  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.476353  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.976069  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.976137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.476306  941476 type.go:168] "Request Body" body=""
	I1213 10:32:43.476396  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.476750  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.476809  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:43.976720  941476 type.go:168] "Request Body" body=""
	I1213 10:32:43.976798  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.977089  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.477009  941476 type.go:168] "Request Body" body=""
	I1213 10:32:44.477085  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.477386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.976767  941476 type.go:168] "Request Body" body=""
	I1213 10:32:44.976848  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.977176  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.475924  941476 type.go:168] "Request Body" body=""
	I1213 10:32:45.476036  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.476370  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.975913  941476 type.go:168] "Request Body" body=""
	I1213 10:32:45.975984  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.976317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:45.976387  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:46.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:32:46.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:46.975972  941476 type.go:168] "Request Body" body=""
	I1213 10:32:46.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.976351  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.476004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:47.476136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.976017  941476 type.go:168] "Request Body" body=""
	I1213 10:32:47.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.976421  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:47.976477  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:48.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:32:48.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:48.976015  941476 type.go:168] "Request Body" body=""
	I1213 10:32:48.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.976419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:49.476106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.476423  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.975990  941476 type.go:168] "Request Body" body=""
	I1213 10:32:49.976065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.976312  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:50.476026  941476 type.go:168] "Request Body" body=""
	I1213 10:32:50.476104  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.476430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:50.476486  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:50.976049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:50.976131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.976481  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.476188  941476 type.go:168] "Request Body" body=""
	I1213 10:32:51.476259  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.476529  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.976428  941476 type.go:168] "Request Body" body=""
	I1213 10:32:51.976507  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.976844  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.476643  941476 type.go:168] "Request Body" body=""
	I1213 10:32:52.476721  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.477067  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:52.477124  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:52.976867  941476 type.go:168] "Request Body" body=""
	I1213 10:32:52.976936  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.977207  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.475946  941476 type.go:168] "Request Body" body=""
	I1213 10:32:53.476027  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.476328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.975930  941476 type.go:168] "Request Body" body=""
	I1213 10:32:53.976034  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.976391  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.475960  941476 type.go:168] "Request Body" body=""
	I1213 10:32:54.476035  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.476297  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.975999  941476 type.go:168] "Request Body" body=""
	I1213 10:32:54.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.976357  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:54.976407  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:55.476011  941476 type.go:168] "Request Body" body=""
	I1213 10:32:55.476101  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.476377  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:55.976254  941476 type.go:168] "Request Body" body=""
	I1213 10:32:55.976330  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:55.976613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:56.476109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:56.976037  941476 type.go:168] "Request Body" body=""
	I1213 10:32:56.976111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:56.976434  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:56.976489  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:57.475980  941476 type.go:168] "Request Body" body=""
	I1213 10:32:57.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.476382  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:57.976008  941476 type.go:168] "Request Body" body=""
	I1213 10:32:57.976084  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:57.976417  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:32:58.476116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.476441  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:58.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:32:58.976067  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:58.976351  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:59.476097  941476 type.go:168] "Request Body" body=""
	I1213 10:32:59.476175  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.476508  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:59.476569  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:59.976006  941476 type.go:168] "Request Body" body=""
	I1213 10:32:59.976086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.976416  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.476102  941476 type.go:168] "Request Body" body=""
	I1213 10:33:00.476181  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.476460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.976047  941476 type.go:168] "Request Body" body=""
	I1213 10:33:00.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.976487  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.476029  941476 type.go:168] "Request Body" body=""
	I1213 10:33:01.476105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.975971  941476 type.go:168] "Request Body" body=""
	I1213 10:33:01.976042  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.976355  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:01.976407  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:02.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:02.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.476438  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:33:02.976252  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.476323  941476 type.go:168] "Request Body" body=""
	I1213 10:33:03.476399  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.476657  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.976052  941476 type.go:168] "Request Body" body=""
	I1213 10:33:03.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:03.976518  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:04.476187  941476 type.go:168] "Request Body" body=""
	I1213 10:33:04.476262  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.476613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:04.976299  941476 type.go:168] "Request Body" body=""
	I1213 10:33:04.976377  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.976641  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.476304  941476 type.go:168] "Request Body" body=""
	I1213 10:33:05.476380  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.476711  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.976815  941476 type.go:168] "Request Body" body=""
	I1213 10:33:05.976895  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.977239  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:05.977294  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:06.475975  941476 type.go:168] "Request Body" body=""
	I1213 10:33:06.476047  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.476308  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:06.976045  941476 type.go:168] "Request Body" body=""
	I1213 10:33:06.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.976516  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.476071  941476 type.go:168] "Request Body" body=""
	I1213 10:33:07.476148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.476544  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.976078  941476 type.go:168] "Request Body" body=""
	I1213 10:33:07.976149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:08.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:33:08.476099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:08.476487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:08.976023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:08.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.976462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.476176  941476 type.go:168] "Request Body" body=""
	I1213 10:33:09.476251  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.476526  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:33:09.976104  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:10.476184  941476 type.go:168] "Request Body" body=""
	I1213 10:33:10.476271  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.476609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:10.476665  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:10.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:33:10.976076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.976358  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:11.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:33:11.476129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.476473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:11.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:11.976106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.976465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.476140  941476 type.go:168] "Request Body" body=""
	I1213 10:33:12.476209  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.476469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:12.976099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.976394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:12.976444  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:13.476111  941476 type.go:168] "Request Body" body=""
	I1213 10:33:13.476187  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.476533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:13.976215  941476 type.go:168] "Request Body" body=""
	I1213 10:33:13.976284  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.976554  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:14.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.976164  941476 type.go:168] "Request Body" body=""
	I1213 10:33:14.976241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.976581  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:14.976644  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:15.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:33:15.476046  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.476298  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:15.975945  941476 type.go:168] "Request Body" body=""
	I1213 10:33:15.976032  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.476144  941476 type.go:168] "Request Body" body=""
	I1213 10:33:16.476219  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.476559  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.976466  941476 type.go:168] "Request Body" body=""
	I1213 10:33:16.976541  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.976809  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:16.976860  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:17.476687  941476 type.go:168] "Request Body" body=""
	I1213 10:33:17.476761  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.477087  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:17.976932  941476 type.go:168] "Request Body" body=""
	I1213 10:33:17.977005  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.977321  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:18.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:33:18.476076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.476392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:18.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:33:18.976114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.976472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:19.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:33:19.476090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.476437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:19.476492  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:19.975984  941476 type.go:168] "Request Body" body=""
	I1213 10:33:19.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.976331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:20.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:20.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:20.976140  941476 type.go:168] "Request Body" body=""
	I1213 10:33:20.976215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.976570  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:21.476259  941476 type.go:168] "Request Body" body=""
	I1213 10:33:21.476335  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.476598  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:21.476641  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:21.976641  941476 type.go:168] "Request Body" body=""
	I1213 10:33:21.976721  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.977055  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:22.476842  941476 type.go:168] "Request Body" body=""
	I1213 10:33:22.476921  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.477263  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:22.975958  941476 type.go:168] "Request Body" body=""
	I1213 10:33:22.976026  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.976279  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:23.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.476440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.976154  941476 type.go:168] "Request Body" body=""
	I1213 10:33:23.976230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.976599  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:23.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:24.476302  941476 type.go:168] "Request Body" body=""
	I1213 10:33:24.476382  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:24.476643  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:24.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:24.976088  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:24.976409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:25.476125  941476 type.go:168] "Request Body" body=""
	I1213 10:33:25.476201  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:25.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:25.976508  941476 type.go:168] "Request Body" body=""
	I1213 10:33:25.976580  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:25.976838  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:25.976879  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:26.476580  941476 type.go:168] "Request Body" body=""
	I1213 10:33:26.476662  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:26.476989  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:26.975922  941476 type.go:168] "Request Body" body=""
	I1213 10:33:26.976010  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:26.976354  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:27.476098  941476 type.go:168] "Request Body" body=""
	I1213 10:33:27.476184  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:27.476458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:27.976019  941476 type.go:168] "Request Body" body=""
	I1213 10:33:27.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:27.976466  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:28.476177  941476 type.go:168] "Request Body" body=""
	I1213 10:33:28.476256  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:28.476603  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:28.476659  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:28.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:33:28.976063  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:28.976324  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:29.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:33:29.476066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:29.476404  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:29.975996  941476 type.go:168] "Request Body" body=""
	I1213 10:33:29.976077  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:29.976425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:30.476099  941476 type.go:168] "Request Body" body=""
	I1213 10:33:30.476165  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:30.476425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:30.976057  941476 type.go:168] "Request Body" body=""
	I1213 10:33:30.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:30.976428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:30.976479  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:31.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:31.476102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:31.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:31.975996  941476 type.go:168] "Request Body" body=""
	I1213 10:33:31.976062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:31.976317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:32.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:33:32.476123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:32.476463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:32.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:33:32.976239  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:32.976579  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:32.976636  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:33.476103  941476 type.go:168] "Request Body" body=""
	I1213 10:33:33.476175  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:33.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:33.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:33:33.976098  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:33.976421  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:34.476116  941476 type.go:168] "Request Body" body=""
	I1213 10:33:34.476189  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:34.476493  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:34.976055  941476 type.go:168] "Request Body" body=""
	I1213 10:33:34.976123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:34.976382  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:35.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:35.476099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:35.476443  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:35.476499  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:35.975939  941476 type.go:168] "Request Body" body=""
	I1213 10:33:35.976014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:35.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:36.475982  941476 type.go:168] "Request Body" body=""
	I1213 10:33:36.476086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:36.476409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:36.976046  941476 type.go:168] "Request Body" body=""
	I1213 10:33:36.976117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:36.976443  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:37.476164  941476 type.go:168] "Request Body" body=""
	I1213 10:33:37.476242  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:37.476524  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:37.476575  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
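
Every cycle in this stretch fails the same way: the TCP dial is refused before any HTTP exchange takes place. A refused dial is a useful signal in itself, because it means the host at 192.168.49.2 is reachable but nothing is listening on port 8441 (the apiserver is down or mid-restart), as opposed to a timeout, which would point at routing or firewall problems. A hedged sketch of that distinction in Go (the address comes from the log; the probe is illustrative, not part of minikube):

	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		// Address taken from the log above; the probe itself is illustrative.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
		switch {
		case err == nil:
			conn.Close()
			fmt.Println("port open: something is listening again")
		case errors.Is(err, syscall.ECONNREFUSED):
			// Matches the log: the host answers, but nothing listens on
			// 8441, i.e. the apiserver process is down or restarting.
			fmt.Println("connection refused:", err)
		default:
			// A timeout here would instead suggest a firewall or routing issue.
			fmt.Println("other dial error:", err)
		}
	}
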
	I1213 10:33:37.976198  941476 type.go:168] "Request Body" body=""
	I1213 10:33:37.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:37.976533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:38.476039  941476 type.go:168] "Request Body" body=""
	I1213 10:33:38.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:38.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:38.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:33:38.976199  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:38.976530  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:39.476088  941476 type.go:168] "Request Body" body=""
	I1213 10:33:39.476161  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:39.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:39.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:33:39.976084  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:39.976397  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:39.976449  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:40.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:33:40.476076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:40.476414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:40.976095  941476 type.go:168] "Request Body" body=""
	I1213 10:33:40.976167  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:40.976436  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:41.476022  941476 type.go:168] "Request Body" body=""
	I1213 10:33:41.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.476397  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:41.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:41.976092  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.976658  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:41.976706  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:42.475980  941476 type.go:168] "Request Body" body=""
	I1213 10:33:42.476055  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.476675  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:42.976377  941476 type.go:168] "Request Body" body=""
	I1213 10:33:42.976455  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.976815  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:43.476621  941476 type.go:168] "Request Body" body=""
	I1213 10:33:43.476701  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.477037  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:43.976823  941476 type.go:168] "Request Body" body=""
	I1213 10:33:43.976888  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.977141  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:43.977181  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:44.476934  941476 type.go:168] "Request Body" body=""
	I1213 10:33:44.477006  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.477335  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:44.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:33:44.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.976470  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.476385  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.976244  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.976320  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.976638  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:46.476051  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.476479  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:46.476535  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:46.975987  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.976313  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.976041  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.976125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:48.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.476522  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:48.476583  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:48.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.976075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.976407  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:33:49.476190  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.476513  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:49.975984  941476 type.go:168] "Request Body" body=""
	I1213 10:33:49.976052  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:49.976304  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:50.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:33:50.476105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.476430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:50.976125  941476 type.go:168] "Request Body" body=""
	I1213 10:33:50.976206  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:50.976556  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:50.976613  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same request/response/warning cycle repeats every ~500 ms for the next minute (≈120 attempts); each GET to https://192.168.49.2:8441/api/v1/nodes/functional-200955 fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the warning above roughly every 2.5 s, until the final attempt at 10:34:52 shown below ...]
	I1213 10:34:52.976550  941476 type.go:168] "Request Body" body=""
	I1213 10:34:52.976620  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:52.976884  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:52.976928  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:53.476704  941476 type.go:168] "Request Body" body=""
	I1213 10:34:53.476789  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.477137  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:53.976929  941476 type.go:168] "Request Body" body=""
	I1213 10:34:53.977004  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.977333  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:54.476034  941476 type.go:168] "Request Body" body=""
	I1213 10:34:54.476106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:54.476377  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:54.976023  941476 type.go:168] "Request Body" body=""
	I1213 10:34:54.976105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:54.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:55.476046  941476 type.go:168] "Request Body" body=""
	I1213 10:34:55.476127  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.476479  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:55.476535  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:55.976212  941476 type.go:168] "Request Body" body=""
	I1213 10:34:55.976283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.976540  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:56.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:34:56.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.476472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:56.976530  941476 type.go:168] "Request Body" body=""
	I1213 10:34:56.976612  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.977004  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:57.476807  941476 type.go:168] "Request Body" body=""
	I1213 10:34:57.476890  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.477154  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:57.477196  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:57.977018  941476 type.go:168] "Request Body" body=""
	I1213 10:34:57.977109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.977446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:58.476146  941476 type.go:168] "Request Body" body=""
	I1213 10:34:58.476225  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.476550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:58.976270  941476 type.go:168] "Request Body" body=""
	I1213 10:34:58.976346  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.976611  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.476051  941476 type.go:168] "Request Body" body=""
	I1213 10:34:59.476143  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.476548  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.976128  941476 type.go:168] "Request Body" body=""
	I1213 10:34:59.976213  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.976516  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:59.976563  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:00.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:35:00.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:00.976317  941476 type.go:168] "Request Body" body=""
	I1213 10:35:00.976411  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:01.476609  941476 type.go:168] "Request Body" body=""
	I1213 10:35:01.476689  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.477045  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:01.976793  941476 type.go:168] "Request Body" body=""
	I1213 10:35:01.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.977145  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:01.977189  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:02.476982  941476 type.go:168] "Request Body" body=""
	I1213 10:35:02.477061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.477408  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:02.976099  941476 type.go:168] "Request Body" body=""
	I1213 10:35:02.976178  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.976550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:03.476237  941476 type.go:168] "Request Body" body=""
	I1213 10:35:03.476319  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.476595  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:03.976292  941476 type.go:168] "Request Body" body=""
	I1213 10:35:03.976381  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.976725  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:04.476528  941476 type.go:168] "Request Body" body=""
	I1213 10:35:04.476603  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.476926  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:04.476983  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:04.976700  941476 type.go:168] "Request Body" body=""
	I1213 10:35:04.976771  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.977027  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:05.476835  941476 type.go:168] "Request Body" body=""
	I1213 10:35:05.476914  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.477258  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:05.976201  941476 type.go:168] "Request Body" body=""
	I1213 10:35:05.976279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.476362  941476 type.go:168] "Request Body" body=""
	I1213 10:35:06.476440  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.476705  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.976610  941476 type.go:168] "Request Body" body=""
	I1213 10:35:06.976688  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.977052  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:06.977113  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:07.476898  941476 type.go:168] "Request Body" body=""
	I1213 10:35:07.476978  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.477359  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:07.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:35:07.976075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.976399  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.476093  941476 type.go:168] "Request Body" body=""
	I1213 10:35:08.476179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.976239  941476 type.go:168] "Request Body" body=""
	I1213 10:35:08.976318  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.976631  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:09.475998  941476 type.go:168] "Request Body" body=""
	I1213 10:35:09.476070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.476334  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:09.476377  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:09.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:35:09.976103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.976446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.476153  941476 type.go:168] "Request Body" body=""
	I1213 10:35:10.476230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.476565  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.976284  941476 type.go:168] "Request Body" body=""
	I1213 10:35:10.976359  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.976641  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:11.476331  941476 type.go:168] "Request Body" body=""
	I1213 10:35:11.476408  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.476754  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:11.476819  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:11.976620  941476 type.go:168] "Request Body" body=""
	I1213 10:35:11.976709  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.977042  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:12.476813  941476 type.go:168] "Request Body" body=""
	I1213 10:35:12.476885  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.477142  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:12.976929  941476 type.go:168] "Request Body" body=""
	I1213 10:35:12.977022  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.977398  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.476001  941476 type.go:168] "Request Body" body=""
	I1213 10:35:13.476080  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.476431  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.976122  941476 type.go:168] "Request Body" body=""
	I1213 10:35:13.976192  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.976457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:13.976500  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:14.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:35:14.476065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.476409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:14.976135  941476 type.go:168] "Request Body" body=""
	I1213 10:35:14.976241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.976610  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.476299  941476 type.go:168] "Request Body" body=""
	I1213 10:35:15.476374  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.476636  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.976597  941476 type.go:168] "Request Body" body=""
	I1213 10:35:15.976678  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.977009  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:15.977062  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:16.476828  941476 type.go:168] "Request Body" body=""
	I1213 10:35:16.476909  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.477284  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:16.975983  941476 type.go:168] "Request Body" body=""
	I1213 10:35:16.976057  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:17.476005  941476 type.go:168] "Request Body" body=""
	I1213 10:35:17.476082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:17.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:17.976147  941476 type.go:168] "Request Body" body=""
	I1213 10:35:17.976234  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:17.976566  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:18.476096  941476 type.go:168] "Request Body" body=""
	I1213 10:35:18.476172  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:18.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:18.476495  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:18.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:35:18.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:18.976435  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:19.476137  941476 type.go:168] "Request Body" body=""
	I1213 10:35:19.476227  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:19.476564  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:19.976248  941476 type.go:168] "Request Body" body=""
	I1213 10:35:19.976327  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:19.976600  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:20.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:35:20.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:20.476474  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:20.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:20.976196  941476 type.go:168] "Request Body" body=""
	I1213 10:35:20.976277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:20.976613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:21.476306  941476 type.go:168] "Request Body" body=""
	I1213 10:35:21.476385  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:21.476650  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:21.976568  941476 type.go:168] "Request Body" body=""
	I1213 10:35:21.976645  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:21.976977  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:22.476793  941476 type.go:168] "Request Body" body=""
	I1213 10:35:22.476870  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:22.477217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:22.477279  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:22.975966  941476 type.go:168] "Request Body" body=""
	I1213 10:35:22.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:22.976311  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:23.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:35:23.476125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:23.476480  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:23.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:35:23.976153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:23.976505  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:24.476197  941476 type.go:168] "Request Body" body=""
	I1213 10:35:24.476265  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:24.476534  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:24.976215  941476 type.go:168] "Request Body" body=""
	I1213 10:35:24.976288  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:24.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:24.976686  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:25.476352  941476 type.go:168] "Request Body" body=""
	I1213 10:35:25.476428  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:25.476773  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:25.976631  941476 type.go:168] "Request Body" body=""
	I1213 10:35:25.976701  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:25.976974  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:26.476849  941476 type.go:168] "Request Body" body=""
	I1213 10:35:26.476924  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:26.477262  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:26.976050  941476 type.go:168] "Request Body" body=""
	I1213 10:35:26.976131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:26.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:27.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:35:27.476053  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:27.476355  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:27.476414  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:27.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:27.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:27.976388  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:28.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:35:28.476210  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:28.476540  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:28.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:35:28.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:28.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:29.476011  941476 type.go:168] "Request Body" body=""
	I1213 10:35:29.476091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:29.476427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:29.476488  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:29.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:35:29.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:29.976433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:30.476132  941476 type.go:168] "Request Body" body=""
	I1213 10:35:30.476208  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:30.476494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:30.976178  941476 type.go:168] "Request Body" body=""
	I1213 10:35:30.976260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:30.976576  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:31.476298  941476 type.go:168] "Request Body" body=""
	I1213 10:35:31.476371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:31.476716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:31.476774  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:31.976573  941476 type.go:168] "Request Body" body=""
	I1213 10:35:31.976645  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:31.976917  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:32.476713  941476 type.go:168] "Request Body" body=""
	I1213 10:35:32.476790  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:32.477195  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:32.975947  941476 type.go:168] "Request Body" body=""
	I1213 10:35:32.976021  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:32.976319  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:33.476001  941476 type.go:168] "Request Body" body=""
	I1213 10:35:33.476069  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.476324  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:33.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:35:33.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:33.976512  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:34.476185  941476 type.go:168] "Request Body" body=""
	I1213 10:35:34.476263  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.476596  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:34.975980  941476 type.go:168] "Request Body" body=""
	I1213 10:35:34.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.976361  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:35.476014  941476 type.go:168] "Request Body" body=""
	I1213 10:35:35.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.476451  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:35.976934  941476 type.go:168] "Request Body" body=""
	I1213 10:35:35.977011  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.977366  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:35.977428  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:36.476062  941476 type.go:168] "Request Body" body=""
	I1213 10:35:36.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.476417  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:36.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:35:36.976334  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.976678  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.476392  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.476480  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.476822  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.976607  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.976691  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.976956  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:38.476714  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.476786  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.477099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:38.477160  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:38.976964  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.977048  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.977472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.476371  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.976024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.476250  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.476607  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.975981  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.976060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.976331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:40.976379  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:41.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:41.976057  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.480099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 10:35:42.976928  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.977007  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.977373  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:42.977438  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:43.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.476136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.476497  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:43.976064  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.976405  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.476012  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.976181  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.976260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.976576  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.476263  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.476338  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.476639  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:45.476717  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:45.976693  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.976776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.977113  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.476938  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.477014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.477384  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.976091  941476 node_ready.go:38] duration metric: took 6m0.000294728s for node "functional-200955" to be "Ready" ...
	I1213 10:35:46.979089  941476 out.go:203] 
	W1213 10:35:46.981875  941476 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:35:46.981899  941476 out.go:285] * 
	W1213 10:35:46.984058  941476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:35:46.987297  941476 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591623981Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.59170097Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591755928Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591808548Z" level=info msg="RDT not available in the host system"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.591883757Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.592750315Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.592870521Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.592951095Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.593821723Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.593926823Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.594167883Z" level=info msg="Updated default CNI network name to "
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.595116846Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.595836842Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.595984519Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650542947Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650722263Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.65083465Z" level=info msg="Create NRI interface"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.65094615Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650963733Z" level=info msg="runtime interface created"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650979553Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.650986076Z" level=info msg="runtime interface starting up..."
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.65099342Z" level=info msg="starting plugins..."
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.651007959Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:29:44 functional-200955 crio[5381]: time="2025-12-13T10:29:44.651079788Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:29:44 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:35:51.495449    8713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:51.500860    8713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:51.501514    8713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:51.502684    8713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:35:51.503430    8713 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:35:51 up  5:18,  0 user,  load average: 0.17, 0.32, 0.87
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:35:49 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:49 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 13 10:35:49 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:49 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:50 functional-200955 kubelet[8606]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:50 functional-200955 kubelet[8606]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:50 functional-200955 kubelet[8606]: E1213 10:35:50.030629    8606 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:50 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:50 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:50 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 13 10:35:50 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:50 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:50 functional-200955 kubelet[8629]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:50 functional-200955 kubelet[8629]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:50 functional-200955 kubelet[8629]: E1213 10:35:50.812097    8629 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:50 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:50 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:51 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 13 10:35:51 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:51 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:51 functional-200955 kubelet[8717]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:51 functional-200955 kubelet[8717]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:51 functional-200955 kubelet[8717]: E1213 10:35:51.540426    8717 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:51 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:51 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (384.546535ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.52s)

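Note on the failure above: the kubelet journal is the actual root cause, not the kubectl symptom. kubelet exits during config validation ("kubelet is configured to not run on a host using cgroup v1") and systemd restart-loops it (counter at 1140+), so nothing ever serves the apiserver port 8441. A minimal way to confirm the node's cgroup mode, assuming SSH access to this profile (cgroup2fs means cgroup v2, tmpfs means cgroup v1):

	# hypothetical diagnostic, not part of the recorded run
	minikube -p functional-200955 ssh -- stat -fc %T /sys/fs/cgroup/

If this prints tmpfs, the node is still on cgroup v1; booting the host kernel with systemd.unified_cgroup_hierarchy=1 is the usual way to switch it to v2.
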
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 kubectl -- --context functional-200955 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 kubectl -- --context functional-200955 get pods: exit status 1 (130.818334ms)

** stderr **
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-200955 kubectl -- --context functional-200955 get pods": exit status 1
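The refused connection is consistent with the crash-looping kubelet seen in the earlier logs: nothing is listening on 8441 inside the node. Two quick probes (hypothetical, not from the recorded run) separate a host-networking problem from a dead apiserver:

	# does anything answer on the endpoint kubectl dials?
	curl -k --connect-timeout 5 https://192.168.49.2:8441/healthz
	# has CRI-O started (or ever started) a kube-apiserver container?
	minikube -p functional-200955 ssh -- sudo crictl ps -a --name kube-apiserver

Here the first command would be refused and the second would show no container, matching the empty "container status" table earlier in the report.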
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
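For reference, the apiserver port mapping buried in this inspect output can be extracted with the same Go-template style minikube itself uses later in this log (see the 22/tcp lookup below); a sketch, assuming the docker CLI on the host:

	# prints the host port bound to the container's 8441/tcp (33526 above, on 127.0.0.1)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-200955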
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (503.759766ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 logs -n 25: (1.236914403s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-769798 image ls --format json --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr                                            │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format table --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls                                                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete         │ -p functional-769798                                                                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start          │ -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-200955 --alsologtostderr -v=8                                                                                                       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:latest                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add minikube-local-cache-test:functional-200955                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache delete minikube-local-cache-test:functional-200955                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl images                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	│ cache          │ functional-200955 cache reload                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ kubectl        │ functional-200955 kubectl -- --context functional-200955 get pods                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:29:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:29:41.597851  941476 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:29:41.597968  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.597980  941476 out.go:374] Setting ErrFile to fd 2...
	I1213 10:29:41.597985  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.598264  941476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:29:41.598640  941476 out.go:368] Setting JSON to false
	I1213 10:29:41.599496  941476 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18731,"bootTime":1765603051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:29:41.599570  941476 start.go:143] virtualization:  
	I1213 10:29:41.603284  941476 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:29:41.606132  941476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:29:41.606240  941476 notify.go:221] Checking for updates...
	I1213 10:29:41.611909  941476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:29:41.614766  941476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:41.617588  941476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:29:41.620495  941476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:29:41.623575  941476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:29:41.626951  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:41.627063  941476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:29:41.660528  941476 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:29:41.660648  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.716071  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.706597811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.716181  941476 docker.go:319] overlay module found
	I1213 10:29:41.719241  941476 out.go:179] * Using the docker driver based on existing profile
	I1213 10:29:41.721997  941476 start.go:309] selected driver: docker
	I1213 10:29:41.722027  941476 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.722127  941476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:29:41.722252  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.778165  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.768783539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.778600  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:41.778650  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:41.778703  941476 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.781806  941476 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:29:41.784501  941476 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:29:41.787625  941476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:29:41.790577  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:41.790637  941476 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:29:41.790650  941476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:29:41.790656  941476 cache.go:65] Caching tarball of preloaded images
	I1213 10:29:41.790739  941476 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:29:41.790750  941476 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:29:41.790859  941476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:29:41.809947  941476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:29:41.809969  941476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:29:41.809989  941476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:29:41.810023  941476 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:29:41.810091  941476 start.go:364] duration metric: took 45.924µs to acquireMachinesLock for "functional-200955"
	I1213 10:29:41.810115  941476 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:29:41.810124  941476 fix.go:54] fixHost starting: 
	I1213 10:29:41.810397  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:41.827321  941476 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:29:41.827351  941476 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:29:41.830448  941476 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:29:41.830480  941476 machine.go:94] provisionDockerMachine start ...
	I1213 10:29:41.830562  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:41.846863  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:41.847197  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:41.847214  941476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:29:41.996943  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:41.996971  941476 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:29:41.997042  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.018825  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.019169  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.019192  941476 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:29:42.186347  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:42.186459  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.209314  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.209694  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.209712  941476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:29:42.370026  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
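The shell snippet above is the provisioner's idempotent hosts-file fixup: if the hostname does not already resolve, it rewrites an existing 127.0.1.1 entry or appends one. A minimal standalone Go sketch of the same check-then-rewrite-or-append logic (hypothetical helper, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry makes sure name appears in the hosts file, mirroring
// the grep/sed/tee branches of the SSH command above.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		for _, f := range strings.Fields(l) {
			if f == name {
				return nil // already resolvable, nothing to do
			}
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name) // append, like tee -a
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-200955"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}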
	I1213 10:29:42.370125  941476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:29:42.370174  941476 ubuntu.go:190] setting up certificates
	I1213 10:29:42.370200  941476 provision.go:84] configureAuth start
	I1213 10:29:42.370268  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:42.388638  941476 provision.go:143] copyHostCerts
	I1213 10:29:42.388684  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388728  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:29:42.388739  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388819  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:29:42.388924  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388947  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:29:42.388956  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388985  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:29:42.389034  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389056  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:29:42.389064  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389093  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:29:42.389148  941476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:29:42.553052  941476 provision.go:177] copyRemoteCerts
	I1213 10:29:42.553125  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:29:42.553174  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.571937  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:42.681380  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:29:42.681440  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:29:42.698297  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:29:42.698381  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:29:42.715245  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:29:42.715360  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:29:42.732152  941476 provision.go:87] duration metric: took 361.926272ms to configureAuth
	I1213 10:29:42.732184  941476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:29:42.732358  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:42.732458  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.749290  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.749620  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.749643  941476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:29:43.093593  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:29:43.093619  941476 machine.go:97] duration metric: took 1.263130563s to provisionDockerMachine
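The container-runtime step just above writes a sysconfig override so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarts the service. A rough Go sketch of the same write-and-restart sequence, assuming root and the exact paths shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same payload the SSH command pipes through tee above.
	content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Restart so CRI-O picks up the new options.
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}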
	I1213 10:29:43.093630  941476 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:29:43.093643  941476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:29:43.093703  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:29:43.093752  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.110551  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.213067  941476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:29:43.216076  941476 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:29:43.216096  941476 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:29:43.216102  941476 command_runner.go:130] > VERSION_ID="12"
	I1213 10:29:43.216108  941476 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:29:43.216112  941476 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:29:43.216116  941476 command_runner.go:130] > ID=debian
	I1213 10:29:43.216121  941476 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:29:43.216125  941476 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:29:43.216147  941476 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:29:43.216196  941476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:29:43.216219  941476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:29:43.216231  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:29:43.216286  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:29:43.216365  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:29:43.216375  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /etc/ssl/certs/9074842.pem
	I1213 10:29:43.216452  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:29:43.216461  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> /etc/test/nested/copy/907484/hosts
	I1213 10:29:43.216512  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:29:43.223706  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:43.242619  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:29:43.261652  941476 start.go:296] duration metric: took 168.007176ms for postStartSetup
	I1213 10:29:43.261748  941476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:29:43.261797  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.278068  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.377852  941476 command_runner.go:130] > 19%
	I1213 10:29:43.378272  941476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:29:43.382521  941476 command_runner.go:130] > 159G
	I1213 10:29:43.382892  941476 fix.go:56] duration metric: took 1.572759496s for fixHost
	I1213 10:29:43.382913  941476 start.go:83] releasing machines lock for "functional-200955", held for 1.572809064s
	I1213 10:29:43.382984  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:43.399315  941476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:29:43.399334  941476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:29:43.399371  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.399397  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.423081  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.424445  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.612877  941476 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:29:43.615557  941476 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:29:43.615725  941476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:29:43.621711  941476 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:29:43.621746  941476 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:29:43.622124  941476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:29:43.667216  941476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:29:43.671902  941476 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:29:43.672160  941476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:29:43.672241  941476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:29:43.679969  941476 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:29:43.679994  941476 start.go:496] detecting cgroup driver to use...
	I1213 10:29:43.680025  941476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:29:43.680082  941476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:29:43.694816  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:29:43.708840  941476 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:29:43.708902  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:29:43.727390  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:29:43.741194  941476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:29:43.853170  941476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:29:43.965117  941476 docker.go:234] disabling docker service ...
	I1213 10:29:43.965193  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:29:43.981069  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:29:43.993651  941476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:29:44.106510  941476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:29:44.230950  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:29:44.243823  941476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:29:44.258241  941476 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:29:44.259524  941476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:29:44.259625  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.267965  941476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:29:44.268046  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.277059  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.285643  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.295522  941476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:29:44.303650  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.312274  941476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.320905  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
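The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. For illustration, the first substitution expressed in Go rather than sed (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage replaces any existing pause_image line, matching the
// `sed -i 's|^.*pause_image = .*$|...|'` invocation above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}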
	I1213 10:29:44.329531  941476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:29:44.336129  941476 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:29:44.337017  941476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:29:44.344665  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:44.479199  941476 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:29:44.656815  941476 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:29:44.656943  941476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:29:44.660542  941476 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:29:44.660573  941476 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:29:44.660581  941476 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 10:29:44.660588  941476 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:44.660594  941476 command_runner.go:130] > Access: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660602  941476 command_runner.go:130] > Modify: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660608  941476 command_runner.go:130] > Change: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660615  941476 command_runner.go:130] >  Birth: -
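The "Will wait 60s for socket path" step polls until the restarted CRI-O exposes its unix socket, then stats it as seen above. A minimal sketch of such a wait loop; the 500ms poll interval is an assumption, since the log does not show it:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket blocks until path exists as a unix socket or the
// timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}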
	I1213 10:29:44.660643  941476 start.go:564] Will wait 60s for crictl version
	I1213 10:29:44.660697  941476 ssh_runner.go:195] Run: which crictl
	I1213 10:29:44.664032  941476 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:29:44.664157  941476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:29:44.686934  941476 command_runner.go:130] > Version:  0.1.0
	I1213 10:29:44.686958  941476 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:29:44.686965  941476 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:29:44.686970  941476 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:29:44.687007  941476 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:29:44.687101  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.715374  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.715400  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.715407  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.715412  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.715417  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.715422  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.715435  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.715442  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.715446  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.715453  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.715457  941476 command_runner.go:130] >      static
	I1213 10:29:44.715461  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.715464  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.715476  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.715480  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.715484  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.715492  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.715496  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.715504  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.715508  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.717596  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.744267  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.744305  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.744312  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.744317  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.744322  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.744327  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.744331  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.744337  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.744341  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.744346  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.744350  941476 command_runner.go:130] >      static
	I1213 10:29:44.744376  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.744385  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.744390  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.744393  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.744397  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.744406  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.744411  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.744419  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.744424  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.751529  941476 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:29:44.754410  941476 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:29:44.770603  941476 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:29:44.774419  941476 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:29:44.774622  941476 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:29:44.774752  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:44.774840  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.811833  941476 command_runner.go:130] > {
	I1213 10:29:44.811851  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.811855  941476 command_runner.go:130] >     {
	I1213 10:29:44.811864  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.811869  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811875  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.811879  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811883  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811892  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.811900  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.811904  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811908  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.811912  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811920  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811923  941476 command_runner.go:130] >     },
	I1213 10:29:44.811927  941476 command_runner.go:130] >     {
	I1213 10:29:44.811933  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.811938  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811944  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.811947  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811951  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811959  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.811968  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.811980  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811984  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.811988  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811994  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811997  941476 command_runner.go:130] >     },
	I1213 10:29:44.812000  941476 command_runner.go:130] >     {
	I1213 10:29:44.812007  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.812011  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812017  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.812020  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812024  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812032  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.812040  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.812047  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812051  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.812056  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.812059  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812062  941476 command_runner.go:130] >     },
	I1213 10:29:44.812066  941476 command_runner.go:130] >     {
	I1213 10:29:44.812073  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.812076  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812081  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.812085  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812089  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812097  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.812104  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.812109  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812113  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.812116  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812120  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812123  941476 command_runner.go:130] >       },
	I1213 10:29:44.812132  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812136  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812143  941476 command_runner.go:130] >     },
	I1213 10:29:44.812146  941476 command_runner.go:130] >     {
	I1213 10:29:44.812152  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.812156  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812161  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.812164  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812168  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812176  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.812184  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.812187  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812191  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.812195  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812198  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812201  941476 command_runner.go:130] >       },
	I1213 10:29:44.812204  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812208  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812211  941476 command_runner.go:130] >     },
	I1213 10:29:44.812213  941476 command_runner.go:130] >     {
	I1213 10:29:44.812220  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.812224  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812230  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.812233  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812236  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812244  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.812253  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.812256  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812259  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.812263  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812266  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812269  941476 command_runner.go:130] >       },
	I1213 10:29:44.812273  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812277  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812280  941476 command_runner.go:130] >     },
	I1213 10:29:44.812286  941476 command_runner.go:130] >     {
	I1213 10:29:44.812293  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.812296  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812302  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.812304  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812308  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812316  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.812323  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.812326  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812330  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.812334  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812337  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812340  941476 command_runner.go:130] >     },
	I1213 10:29:44.812343  941476 command_runner.go:130] >     {
	I1213 10:29:44.812349  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.812353  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812358  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.812361  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812364  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812372  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.812390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.812393  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812397  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.812400  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812405  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812408  941476 command_runner.go:130] >       },
	I1213 10:29:44.812412  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812416  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812419  941476 command_runner.go:130] >     },
	I1213 10:29:44.812422  941476 command_runner.go:130] >     {
	I1213 10:29:44.812428  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.812432  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812436  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.812442  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812446  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812454  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.812462  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.812464  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812468  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.812471  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812475  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.812478  941476 command_runner.go:130] >       },
	I1213 10:29:44.812482  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812485  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.812488  941476 command_runner.go:130] >     }
	I1213 10:29:44.812491  941476 command_runner.go:130] >   ]
	I1213 10:29:44.812494  941476 command_runner.go:130] > }
	I1213 10:29:44.812656  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.812664  941476 crio.go:433] Images already preloaded, skipping extraction
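The "all images are preloaded" conclusion comes from decoding the crictl JSON above and checking that every required repo tag is present. A sketch of that verification against the JSON shape shown; the struct and the required-image list here are assumptions for illustration, not minikube's internals:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// imageList mirrors the subset of `crictl images --output json` used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	// Two of the tags visible in the log above, as sample requirements.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/pause:3.10.1",
	} {
		if !have[want] {
			fmt.Println("missing:", want)
		}
	}
}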
	I1213 10:29:44.812720  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.834840  941476 command_runner.go:130] > {
	I1213 10:29:44.834859  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.834863  941476 command_runner.go:130] >     {
	I1213 10:29:44.834871  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.834878  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834893  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.834897  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834903  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834913  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.834921  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.834924  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834928  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.834932  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.834941  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.834944  941476 command_runner.go:130] >     },
	I1213 10:29:44.834947  941476 command_runner.go:130] >     {
	I1213 10:29:44.834953  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.834957  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834962  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.834965  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834969  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834977  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.834986  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.834989  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834993  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.834997  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835006  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835009  941476 command_runner.go:130] >     },
	I1213 10:29:44.835013  941476 command_runner.go:130] >     {
	I1213 10:29:44.835019  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.835023  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835028  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.835032  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835036  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835044  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.835052  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.835055  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835058  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.835062  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.835066  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835069  941476 command_runner.go:130] >     },
	I1213 10:29:44.835073  941476 command_runner.go:130] >     {
	I1213 10:29:44.835080  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.835083  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835088  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.835093  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835100  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835108  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.835116  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.835119  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835123  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.835127  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835131  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835134  941476 command_runner.go:130] >       },
	I1213 10:29:44.835147  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835151  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835154  941476 command_runner.go:130] >     },
	I1213 10:29:44.835157  941476 command_runner.go:130] >     {
	I1213 10:29:44.835163  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.835167  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835172  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.835175  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835179  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835187  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.835195  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.835197  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835201  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.835205  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835209  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835212  941476 command_runner.go:130] >       },
	I1213 10:29:44.835215  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835219  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835222  941476 command_runner.go:130] >     },
	I1213 10:29:44.835224  941476 command_runner.go:130] >     {
	I1213 10:29:44.835231  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.835234  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835240  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.835243  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835247  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835261  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.835270  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.835273  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835277  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.835281  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835285  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835288  941476 command_runner.go:130] >       },
	I1213 10:29:44.835292  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835295  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835298  941476 command_runner.go:130] >     },
	I1213 10:29:44.835302  941476 command_runner.go:130] >     {
	I1213 10:29:44.835309  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.835312  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835318  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.835320  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835324  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835332  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.835340  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.835343  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835347  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.835351  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835355  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835358  941476 command_runner.go:130] >     },
	I1213 10:29:44.835361  941476 command_runner.go:130] >     {
	I1213 10:29:44.835367  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.835371  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835376  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.835379  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835383  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.835407  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.835411  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835415  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.835422  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835426  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835429  941476 command_runner.go:130] >       },
	I1213 10:29:44.835433  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835436  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835439  941476 command_runner.go:130] >     },
	I1213 10:29:44.835442  941476 command_runner.go:130] >     {
	I1213 10:29:44.835449  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.835452  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835457  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.835460  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835463  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835470  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.835478  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.835481  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835485  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.835489  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835492  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.835495  941476 command_runner.go:130] >       },
	I1213 10:29:44.835499  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835503  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.835506  941476 command_runner.go:130] >     }
	I1213 10:29:44.835508  941476 command_runner.go:130] >   ]
	I1213 10:29:44.835512  941476 command_runner.go:130] > }
	I1213 10:29:44.838144  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.838206  941476 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:29:44.838219  941476 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:29:44.838324  941476 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
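The ExecStart line in the unit above is generated from the node config that follows it (Kubernetes version, hostname override, node IP). A sketch of rendering such a drop-in with text/template; the template body and field names are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Name}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, struct{ Version, Name, IP string }{
		"v1.35.0-beta.0", "functional-200955", "192.168.49.2",
	})
}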
	I1213 10:29:44.838426  941476 ssh_runner.go:195] Run: crio config
	I1213 10:29:44.886075  941476 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:29:44.886098  941476 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:29:44.886106  941476 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:29:44.886110  941476 command_runner.go:130] > #
	I1213 10:29:44.886117  941476 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:29:44.886124  941476 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:29:44.886130  941476 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:29:44.886139  941476 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:29:44.886142  941476 command_runner.go:130] > # reload'.
	I1213 10:29:44.886162  941476 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:29:44.886169  941476 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:29:44.886175  941476 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:29:44.886181  941476 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:29:44.886184  941476 command_runner.go:130] > [crio]
	I1213 10:29:44.886190  941476 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:29:44.886195  941476 command_runner.go:130] > # containers images, in this directory.
	I1213 10:29:44.886932  941476 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:29:44.886948  941476 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:29:44.887520  941476 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:29:44.887536  941476 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:29:44.887990  941476 command_runner.go:130] > # imagestore = ""
	I1213 10:29:44.888002  941476 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:29:44.888019  941476 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:29:44.888390  941476 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:29:44.888402  941476 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:29:44.888409  941476 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:29:44.888578  941476 command_runner.go:130] > # storage_option = [
	I1213 10:29:44.888743  941476 command_runner.go:130] > # ]
	I1213 10:29:44.888754  941476 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:29:44.888761  941476 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:29:44.888765  941476 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:29:44.888771  941476 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:29:44.888787  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:29:44.888792  941476 command_runner.go:130] > # always happen on a node reboot
	I1213 10:29:44.888797  941476 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:29:44.888807  941476 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:29:44.888813  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:29:44.888818  941476 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:29:44.888822  941476 command_runner.go:130] > # version_file_persist = ""
	I1213 10:29:44.888829  941476 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:29:44.888839  941476 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:29:44.888843  941476 command_runner.go:130] > # internal_wipe = true
	I1213 10:29:44.888851  941476 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:29:44.888856  941476 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:29:44.888860  941476 command_runner.go:130] > # internal_repair = true
	I1213 10:29:44.888869  941476 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:29:44.888875  941476 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:29:44.888881  941476 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:29:44.888886  941476 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:29:44.888892  941476 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:29:44.888895  941476 command_runner.go:130] > [crio.api]
	I1213 10:29:44.888901  941476 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:29:44.888905  941476 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:29:44.888910  941476 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:29:44.888914  941476 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:29:44.888921  941476 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:29:44.888926  941476 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:29:44.888929  941476 command_runner.go:130] > # stream_port = "0"
	I1213 10:29:44.888934  941476 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:29:44.888938  941476 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:29:44.888944  941476 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:29:44.889110  941476 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:29:44.889121  941476 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:29:44.889127  941476 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889131  941476 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:29:44.889137  941476 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:29:44.889143  941476 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889156  941476 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:29:44.889162  941476 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:29:44.889169  941476 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:29:44.889177  941476 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:29:44.889181  941476 command_runner.go:130] > # stream_tls_ca = ""
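	For illustration, enabling TLS for the stream server using the options above might look like the following sketch; the certificate and key paths are hypothetical placeholders, not values from this run:

	    stream_enable_tls = true
	    stream_tls_cert = "/etc/crio/stream.crt"    # hypothetical certificate path
	    stream_tls_key = "/etc/crio/stream.key"     # hypothetical key path
	    stream_tls_ca = "/etc/crio/stream-ca.crt"   # hypothetical CA bundle path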
	I1213 10:29:44.889197  941476 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889202  941476 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:29:44.889209  941476 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889214  941476 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:29:44.889220  941476 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:29:44.889225  941476 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:29:44.889229  941476 command_runner.go:130] > [crio.runtime]
	I1213 10:29:44.889235  941476 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:29:44.889240  941476 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:29:44.889244  941476 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:29:44.889253  941476 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:29:44.889257  941476 command_runner.go:130] > # default_ulimits = [
	I1213 10:29:44.889260  941476 command_runner.go:130] > # ]
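	For illustration, a default_ulimits entry in the "<ulimit name>=<soft limit>:<hard limit>" format described above might look like this (the values are hypothetical):

	    default_ulimits = [
	        "nofile=1024:2048",
	    ]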
	I1213 10:29:44.889265  941476 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:29:44.889269  941476 command_runner.go:130] > # no_pivot = false
	I1213 10:29:44.889274  941476 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:29:44.889280  941476 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:29:44.889285  941476 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:29:44.889291  941476 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:29:44.889296  941476 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:29:44.889318  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889322  941476 command_runner.go:130] > # conmon = ""
	I1213 10:29:44.889327  941476 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:29:44.889333  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:29:44.889512  941476 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:29:44.889563  941476 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:29:44.889585  941476 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:29:44.889610  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889647  941476 command_runner.go:130] > # conmon_env = [
	I1213 10:29:44.889671  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889696  941476 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:29:44.889721  941476 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:29:44.889753  941476 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:29:44.889776  941476 command_runner.go:130] > # default_env = [
	I1213 10:29:44.889797  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889822  941476 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:29:44.889858  941476 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:29:44.889885  941476 command_runner.go:130] > # selinux = false
	I1213 10:29:44.889906  941476 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:29:44.889932  941476 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:29:44.889962  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.889985  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.890009  941476 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:29:44.890029  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890061  941476 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:29:44.890087  941476 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:29:44.890109  941476 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:29:44.890133  941476 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:29:44.890166  941476 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:29:44.890191  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890212  941476 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:29:44.890236  941476 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:29:44.890284  941476 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:29:44.890307  941476 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:29:44.890329  941476 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:29:44.890350  941476 command_runner.go:130] > # blockio parameters.
	I1213 10:29:44.890409  941476 command_runner.go:130] > # blockio_reload = false
	I1213 10:29:44.890437  941476 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:29:44.890458  941476 command_runner.go:130] > # irqbalance daemon.
	I1213 10:29:44.890483  941476 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:29:44.890515  941476 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:29:44.890551  941476 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:29:44.890575  941476 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:29:44.890599  941476 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:29:44.890631  941476 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:29:44.890655  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890676  941476 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:29:44.890716  941476 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:29:44.890743  941476 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:29:44.890767  941476 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:29:44.890788  941476 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:29:44.890824  941476 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:29:44.890863  941476 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:29:44.890886  941476 command_runner.go:130] > # will be added.
	I1213 10:29:44.890904  941476 command_runner.go:130] > # default_capabilities = [
	I1213 10:29:44.890932  941476 command_runner.go:130] > # 	"CHOWN",
	I1213 10:29:44.890957  941476 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:29:44.891256  941476 command_runner.go:130] > # 	"FSETID",
	I1213 10:29:44.891291  941476 command_runner.go:130] > # 	"FOWNER",
	I1213 10:29:44.891318  941476 command_runner.go:130] > # 	"SETGID",
	I1213 10:29:44.891335  941476 command_runner.go:130] > # 	"SETUID",
	I1213 10:29:44.891390  941476 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:29:44.891416  941476 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:29:44.891438  941476 command_runner.go:130] > # 	"KILL",
	I1213 10:29:44.891461  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891498  941476 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:29:44.891527  941476 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:29:44.891550  941476 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:29:44.891572  941476 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:29:44.891606  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891629  941476 command_runner.go:130] > default_sysctls = [
	I1213 10:29:44.891651  941476 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:29:44.891671  941476 command_runner.go:130] > ]
	I1213 10:29:44.891705  941476 command_runner.go:130] > # List of devices on the host that a
	I1213 10:29:44.891730  941476 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:29:44.891749  941476 command_runner.go:130] > # allowed_devices = [
	I1213 10:29:44.891779  941476 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:29:44.891809  941476 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:29:44.891834  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891856  941476 command_runner.go:130] > # List of additional devices, specified as
	I1213 10:29:44.891880  941476 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:29:44.891914  941476 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:29:44.891940  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891962  941476 command_runner.go:130] > # additional_devices = [
	I1213 10:29:44.891983  941476 command_runner.go:130] > # ]
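	For illustration, an additional_devices entry in the "<device-on-host>:<device-on-container>:<permissions>" format above, reusing the example device mapping from the comment, would be:

	    additional_devices = [
	        "/dev/sdc:/dev/xvdc:rwm",
	    ]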
	I1213 10:29:44.892017  941476 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:29:44.892041  941476 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:29:44.892063  941476 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:29:44.892082  941476 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:29:44.892103  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892139  941476 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:29:44.892161  941476 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:29:44.892183  941476 command_runner.go:130] > # Defaults to false.
	I1213 10:29:44.892215  941476 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:29:44.892243  941476 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:29:44.892267  941476 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:29:44.892287  941476 command_runner.go:130] > # hooks_dir = [
	I1213 10:29:44.892324  941476 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:29:44.892349  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892371  941476 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:29:44.892394  941476 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:29:44.892427  941476 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:29:44.892450  941476 command_runner.go:130] > #
	I1213 10:29:44.892472  941476 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:29:44.892496  941476 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:29:44.892529  941476 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:29:44.892555  941476 command_runner.go:130] > #
	I1213 10:29:44.892582  941476 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:29:44.892608  941476 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:29:44.892654  941476 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:29:44.892680  941476 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:29:44.892700  941476 command_runner.go:130] > #
	I1213 10:29:44.892722  941476 command_runner.go:130] > # default_mounts_file = ""
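	For illustration, pointing CRI-O at the override file named above would look like the sketch below; each line of that file then uses the /SRC:/DST form:

	    # /etc/containers/mounts.conf is the override file described above;
	    # its contents are one mount per line, in the form /SRC:/DST
	    default_mounts_file = "/etc/containers/mounts.conf"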
	I1213 10:29:44.892742  941476 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:29:44.892779  941476 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:29:44.892797  941476 command_runner.go:130] > # pids_limit = -1
	I1213 10:29:44.892825  941476 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 10:29:44.892860  941476 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:29:44.892886  941476 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:29:44.892912  941476 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:29:44.892937  941476 command_runner.go:130] > # log_size_max = -1
	I1213 10:29:44.892967  941476 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:29:44.892992  941476 command_runner.go:130] > # log_to_journald = false
	I1213 10:29:44.893016  941476 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:29:44.893040  941476 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:29:44.893073  941476 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:29:44.893097  941476 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:29:44.893118  941476 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:29:44.893142  941476 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:29:44.893174  941476 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:29:44.893198  941476 command_runner.go:130] > # read_only = false
	I1213 10:29:44.893223  941476 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:29:44.893245  941476 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:29:44.893278  941476 command_runner.go:130] > # live configuration reload.
	I1213 10:29:44.893302  941476 command_runner.go:130] > # log_level = "info"
	I1213 10:29:44.893331  941476 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:29:44.893353  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.893380  941476 command_runner.go:130] > # log_filter = ""
	I1213 10:29:44.893406  941476 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893430  941476 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:29:44.893452  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893486  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893520  941476 command_runner.go:130] > # uid_mappings = ""
	I1213 10:29:44.893564  941476 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893593  941476 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:29:44.893617  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893643  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893997  941476 command_runner.go:130] > # gid_mappings = ""
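	For illustration, mappings in the containerUID:HostUID:Size (or containerGID:HostGID:Size) form described above might look like this; the host ranges are hypothetical, and both options are deprecated in favor of KEP-127:

	    uid_mappings = "0:100000:65536"   # hypothetical host UID range
	    gid_mappings = "0:100000:65536"   # hypothetical host GID range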
	I1213 10:29:44.894010  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:29:44.894017  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894024  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894032  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894037  941476 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:29:44.894043  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:29:44.894050  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894056  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894064  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894068  941476 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:29:44.894074  941476 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:29:44.894081  941476 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:29:44.894086  941476 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:29:44.894090  941476 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:29:44.894096  941476 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:29:44.894102  941476 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:29:44.894107  941476 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:29:44.894111  941476 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:29:44.894115  941476 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:29:44.894121  941476 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:29:44.894127  941476 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:29:44.894135  941476 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:29:44.894141  941476 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:29:44.894149  941476 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:29:44.894155  941476 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:29:44.894160  941476 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:29:44.894165  941476 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:29:44.894173  941476 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:29:44.894179  941476 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:29:44.894184  941476 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:29:44.894188  941476 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:29:44.894195  941476 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:29:44.894199  941476 command_runner.go:130] > # pinns_path = ""
	I1213 10:29:44.894204  941476 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:29:44.894210  941476 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:29:44.894216  941476 command_runner.go:130] > # enable_criu_support = true
	I1213 10:29:44.894223  941476 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:29:44.894229  941476 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:29:44.894234  941476 command_runner.go:130] > # enable_pod_events = false
	I1213 10:29:44.894240  941476 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:29:44.894245  941476 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:29:44.894249  941476 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:29:44.894254  941476 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:29:44.894261  941476 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1213 10:29:44.894271  941476 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:29:44.894276  941476 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:29:44.894284  941476 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:29:44.894289  941476 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:29:44.894293  941476 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:29:44.894297  941476 command_runner.go:130] > # ]
	I1213 10:29:44.894303  941476 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:29:44.894309  941476 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:29:44.894316  941476 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:29:44.894321  941476 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:29:44.894324  941476 command_runner.go:130] > #
	I1213 10:29:44.894329  941476 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:29:44.894333  941476 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:29:44.894337  941476 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:29:44.894342  941476 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:29:44.894348  941476 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:29:44.894367  941476 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:29:44.894372  941476 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:29:44.894377  941476 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:29:44.894381  941476 command_runner.go:130] > # monitor_env = []
	I1213 10:29:44.894386  941476 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:29:44.894390  941476 command_runner.go:130] > # allowed_annotations = []
	I1213 10:29:44.894395  941476 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:29:44.894399  941476 command_runner.go:130] > # no_sync_log = false
	I1213 10:29:44.894403  941476 command_runner.go:130] > # default_annotations = {}
	I1213 10:29:44.894407  941476 command_runner.go:130] > # stream_websockets = false
	I1213 10:29:44.894411  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.894442  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.894448  941476 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:29:44.894454  941476 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:29:44.894461  941476 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:29:44.894468  941476 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:29:44.894471  941476 command_runner.go:130] > #   in $PATH.
	I1213 10:29:44.894478  941476 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:29:44.894482  941476 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:29:44.894488  941476 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:29:44.894492  941476 command_runner.go:130] > #   state.
	I1213 10:29:44.894498  941476 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:29:44.894504  941476 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1213 10:29:44.894510  941476 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:29:44.894516  941476 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:29:44.894521  941476 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:29:44.894527  941476 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:29:44.894533  941476 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:29:44.894539  941476 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:29:44.894545  941476 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:29:44.894550  941476 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:29:44.894557  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:29:44.894564  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:29:44.894574  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:29:44.894580  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:29:44.894588  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:29:44.894596  941476 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:29:44.894602  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:29:44.894608  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:29:44.894614  941476 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:29:44.894621  941476 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:29:44.894628  941476 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:29:44.894634  941476 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:29:44.894640  941476 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:29:44.894646  941476 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:29:44.894652  941476 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:29:44.894661  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:29:44.894667  941476 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:29:44.894672  941476 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:29:44.894679  941476 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:29:44.894684  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:29:44.894691  941476 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:29:44.894695  941476 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:29:44.894702  941476 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:29:44.894707  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:29:44.894714  941476 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1213 10:29:44.894718  941476 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:29:44.894726  941476 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:29:44.894734  941476 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:29:44.894742  941476 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:29:44.894748  941476 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:29:44.894753  941476 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:29:44.894760  941476 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:29:44.894768  941476 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:29:44.894774  941476 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:29:44.894782  941476 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:29:44.894794  941476 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:29:44.894798  941476 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:29:44.894805  941476 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:29:44.894813  941476 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:29:44.894821  941476 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:29:44.894825  941476 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:29:44.894838  941476 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:29:44.894848  941476 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:29:44.894851  941476 command_runner.go:130] > #
	I1213 10:29:44.894855  941476 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:29:44.894859  941476 command_runner.go:130] > #
	I1213 10:29:44.894866  941476 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:29:44.894872  941476 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1213 10:29:44.894878  941476 command_runner.go:130] > #
	I1213 10:29:44.894887  941476 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:29:44.894893  941476 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:29:44.894896  941476 command_runner.go:130] > #
	I1213 10:29:44.894903  941476 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:29:44.894906  941476 command_runner.go:130] > # feature.
	I1213 10:29:44.894909  941476 command_runner.go:130] > #
	I1213 10:29:44.894914  941476 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1213 10:29:44.894921  941476 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:29:44.894927  941476 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:29:44.894933  941476 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:29:44.894939  941476 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:29:44.894942  941476 command_runner.go:130] > #
	I1213 10:29:44.894948  941476 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:29:44.894954  941476 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:29:44.894957  941476 command_runner.go:130] > #
	I1213 10:29:44.894963  941476 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:29:44.894968  941476 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:29:44.894971  941476 command_runner.go:130] > #
	I1213 10:29:44.894977  941476 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:29:44.894987  941476 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:29:44.894991  941476 command_runner.go:130] > # limitation.
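	For illustration, a runtime handler that allows the seccomp notifier annotation, as required above, might be declared like this (the handler name is hypothetical; the runc path matches the one used later in this config):

	    [crio.runtime.runtimes.runc-debug]
	    runtime_path = "/usr/libexec/crio/runc"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.seccompNotifierAction",
	    ]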
	I1213 10:29:44.894995  941476 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:29:44.895000  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:29:44.895004  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895008  941476 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:29:44.895013  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895016  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895020  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895025  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895028  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895032  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895036  941476 command_runner.go:130] > allowed_annotations = [
	I1213 10:29:44.895040  941476 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:29:44.895043  941476 command_runner.go:130] > ]
	I1213 10:29:44.895047  941476 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:29:44.895051  941476 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:29:44.895056  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:29:44.895059  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895064  941476 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:29:44.895069  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895072  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895076  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895081  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895084  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895089  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895093  941476 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:29:44.895100  941476 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:29:44.895105  941476 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:29:44.895111  941476 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:29:44.895119  941476 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 10:29:44.895129  941476 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:29:44.895139  941476 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:29:44.895151  941476 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:29:44.895156  941476 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:29:44.895166  941476 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:29:44.895174  941476 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:29:44.895181  941476 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:29:44.895188  941476 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:29:44.895191  941476 command_runner.go:130] > # Example:
	I1213 10:29:44.895196  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:29:44.895201  941476 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:29:44.895207  941476 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:29:44.895212  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:29:44.895216  941476 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:29:44.895219  941476 command_runner.go:130] > # cpushares = "5"
	I1213 10:29:44.895223  941476 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:29:44.895227  941476 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:29:44.895230  941476 command_runner.go:130] > # cpulimit = "35"
	I1213 10:29:44.895234  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.895238  941476 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:29:44.895245  941476 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:29:44.895250  941476 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:29:44.895259  941476 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:29:44.895267  941476 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:29:44.895274  941476 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 10:29:44.895279  941476 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:29:44.895286  941476 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:29:44.895290  941476 command_runner.go:130] > # Default value is set to true
	I1213 10:29:44.895294  941476 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:29:44.895300  941476 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:29:44.895305  941476 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:29:44.895309  941476 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:29:44.895313  941476 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:29:44.895318  941476 command_runner.go:130] > # timezone: Sets the timezone for a container in CRI-O.
	I1213 10:29:44.895326  941476 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:29:44.895334  941476 command_runner.go:130] > # timezone = ""
	I1213 10:29:44.895341  941476 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:29:44.895343  941476 command_runner.go:130] > #
	I1213 10:29:44.895349  941476 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:29:44.895355  941476 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:29:44.895358  941476 command_runner.go:130] > [crio.image]
	I1213 10:29:44.895364  941476 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:29:44.895368  941476 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:29:44.895373  941476 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:29:44.895380  941476 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895383  941476 command_runner.go:130] > # global_auth_file = ""
	I1213 10:29:44.895388  941476 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:29:44.895393  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895398  941476 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.895404  941476 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:29:44.895412  941476 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895417  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895420  941476 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:29:44.895426  941476 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:29:44.895432  941476 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1213 10:29:44.895438  941476 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1213 10:29:44.895444  941476 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:29:44.895448  941476 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:29:44.895454  941476 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:29:44.895460  941476 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:29:44.895467  941476 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:29:44.895473  941476 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:29:44.895479  941476 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:29:44.895485  941476 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:29:44.895488  941476 command_runner.go:130] > # pinned_images = [
	I1213 10:29:44.895491  941476 command_runner.go:130] > # ]
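	For illustration, a pinned_images list combining the exact, glob, and keyword patterns described above might look like this; aside from the pause image named earlier, the entries are hypothetical:

	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",   # exact match
	        "quay.io/myorg/*",                # glob match (trailing wildcard)
	        "*critical*",                     # keyword match (wildcards on both ends)
	    ]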
	I1213 10:29:44.895497  941476 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:29:44.895503  941476 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:29:44.895512  941476 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:29:44.895519  941476 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:29:44.895524  941476 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:29:44.895529  941476 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:29:44.895534  941476 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:29:44.895540  941476 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:29:44.895547  941476 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:29:44.895554  941476 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1213 10:29:44.895559  941476 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:29:44.895564  941476 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:29:44.895570  941476 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:29:44.895576  941476 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:29:44.895580  941476 command_runner.go:130] > # changing them here.
	I1213 10:29:44.895586  941476 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:29:44.895590  941476 command_runner.go:130] > # insecure_registries = [
	I1213 10:29:44.895592  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895598  941476 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:29:44.895603  941476 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:29:44.895609  941476 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:29:44.895614  941476 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:29:44.895618  941476 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:29:44.895623  941476 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:29:44.895630  941476 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:29:44.895634  941476 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:29:44.895641  941476 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:29:44.895651  941476 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1213 10:29:44.895657  941476 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:29:44.895662  941476 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:29:44.895666  941476 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:29:44.895672  941476 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:29:44.895679  941476 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 10:29:44.895684  941476 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:29:44.895688  941476 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:29:44.895697  941476 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:29:44.895704  941476 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:29:44.895708  941476 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 10:29:44.895715  941476 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:29:44.895718  941476 command_runner.go:130] > # CNI plugins.
	I1213 10:29:44.895721  941476 command_runner.go:130] > [crio.network]
	I1213 10:29:44.895727  941476 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:29:44.895732  941476 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1213 10:29:44.895735  941476 command_runner.go:130] > # cni_default_network = ""
	I1213 10:29:44.895741  941476 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:29:44.895745  941476 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:29:44.895751  941476 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:29:44.895754  941476 command_runner.go:130] > # plugin_dirs = [
	I1213 10:29:44.895758  941476 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:29:44.895760  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895764  941476 command_runner.go:130] > # List of included pod metrics.
	I1213 10:29:44.895768  941476 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:29:44.895771  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895778  941476 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:29:44.895781  941476 command_runner.go:130] > [crio.metrics]
	I1213 10:29:44.895786  941476 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:29:44.895790  941476 command_runner.go:130] > # enable_metrics = false
	I1213 10:29:44.895794  941476 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:29:44.895799  941476 command_runner.go:130] > # By default, all metrics are enabled.
	I1213 10:29:44.895805  941476 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:29:44.895813  941476 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:29:44.895818  941476 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:29:44.895822  941476 command_runner.go:130] > # metrics_collectors = [
	I1213 10:29:44.895826  941476 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:29:44.895831  941476 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:29:44.895834  941476 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:29:44.895838  941476 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:29:44.895842  941476 command_runner.go:130] > # 	"operations_total",
	I1213 10:29:44.895849  941476 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:29:44.895854  941476 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:29:44.895859  941476 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:29:44.895863  941476 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:29:44.895867  941476 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:29:44.895871  941476 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:29:44.895875  941476 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:29:44.895879  941476 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:29:44.895883  941476 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:29:44.895888  941476 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:29:44.895892  941476 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:29:44.895896  941476 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:29:44.895899  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895905  941476 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:29:44.895908  941476 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:29:44.895913  941476 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:29:44.895917  941476 command_runner.go:130] > # metrics_port = 9090
	I1213 10:29:44.895922  941476 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:29:44.895925  941476 command_runner.go:130] > # metrics_socket = ""
	I1213 10:29:44.895930  941476 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:29:44.895937  941476 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:29:44.895943  941476 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:29:44.895947  941476 command_runner.go:130] > # certificate on any modification event.
	I1213 10:29:44.895951  941476 command_runner.go:130] > # metrics_cert = ""
	I1213 10:29:44.895955  941476 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:29:44.895960  941476 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:29:44.895963  941476 command_runner.go:130] > # metrics_key = ""
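
	The crio.metrics block above is entirely commented-out defaults, so CRI-O serves no Prometheus endpoint unless enable_metrics is turned on. As a minimal sketch, assuming a drop-in in /etc/crio/crio.conf.d/ sets enable_metrics = true and keeps the default metrics_host/metrics_port shown above, the exposition endpoint can be probed from Go:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// 127.0.0.1:9090 matches the metrics_host / metrics_port defaults above.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("got %d bytes of Prometheus exposition data\n", len(body))
	}
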
	I1213 10:29:44.895969  941476 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:29:44.895972  941476 command_runner.go:130] > [crio.tracing]
	I1213 10:29:44.895978  941476 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:29:44.895981  941476 command_runner.go:130] > # enable_tracing = false
	I1213 10:29:44.895987  941476 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 10:29:44.895991  941476 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:29:44.896000  941476 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:29:44.896007  941476 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:29:44.896011  941476 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:29:44.896014  941476 command_runner.go:130] > [crio.nri]
	I1213 10:29:44.896018  941476 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:29:44.896022  941476 command_runner.go:130] > # enable_nri = true
	I1213 10:29:44.896025  941476 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:29:44.896030  941476 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:29:44.896034  941476 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:29:44.896038  941476 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:29:44.896043  941476 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:29:44.896051  941476 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:29:44.896057  941476 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:29:44.896113  941476 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:29:44.896119  941476 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:29:44.896123  941476 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:29:44.896128  941476 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:29:44.896133  941476 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:29:44.896137  941476 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:29:44.896144  941476 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:29:44.896150  941476 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:29:44.896155  941476 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:29:44.896158  941476 command_runner.go:130] > # - OCI hook injection
	I1213 10:29:44.896163  941476 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:29:44.896167  941476 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:29:44.896172  941476 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:29:44.896176  941476 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:29:44.896186  941476 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:29:44.896193  941476 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:29:44.896198  941476 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:29:44.896201  941476 command_runner.go:130] > #
	I1213 10:29:44.896205  941476 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:29:44.896209  941476 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:29:44.896218  941476 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:29:44.896223  941476 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:29:44.896229  941476 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:29:44.896234  941476 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:29:44.896239  941476 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:29:44.896243  941476 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:29:44.896245  941476 command_runner.go:130] > # ]
	I1213 10:29:44.896251  941476 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 10:29:44.896257  941476 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:29:44.896261  941476 command_runner.go:130] > [crio.stats]
	I1213 10:29:44.896267  941476 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:29:44.896272  941476 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:29:44.896276  941476 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:29:44.896281  941476 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:29:44.896287  941476 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:29:44.896291  941476 command_runner.go:130] > # collection_period = 0
	I1213 10:29:44.896753  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865564739Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:29:44.896774  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865608538Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:29:44.896784  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865641285Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:29:44.896793  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.86566636Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:29:44.896803  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865746328Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.896812  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.866102466Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:29:44.896826  941476 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 10:29:44.896949  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:44.896967  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:44.896990  941476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:29:44.897016  941476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:29:44.897147  941476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:29:44.897221  941476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:29:44.904800  941476 command_runner.go:130] > kubeadm
	I1213 10:29:44.904821  941476 command_runner.go:130] > kubectl
	I1213 10:29:44.904825  941476 command_runner.go:130] > kubelet
	I1213 10:29:44.905083  941476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:29:44.905149  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:29:44.912855  941476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:29:44.926542  941476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:29:44.940018  941476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:29:44.953058  941476 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:29:44.956927  941476 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:29:44.957067  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.090811  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:45.111343  941476 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:29:45.111425  941476 certs.go:195] generating shared ca certs ...
	I1213 10:29:45.111459  941476 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.111653  941476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:29:45.111736  941476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:29:45.111762  941476 certs.go:257] generating profile certs ...
	I1213 10:29:45.111936  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:29:45.112043  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:29:45.112141  941476 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:29:45.112183  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:29:45.112222  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:29:45.112262  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:29:45.112293  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:29:45.112328  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:29:45.112371  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:29:45.112404  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:29:45.112444  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:29:45.112521  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:29:45.112600  941476 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:29:45.112629  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:29:45.112687  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:29:45.112733  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:29:45.112831  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:29:45.113060  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:45.113147  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem -> /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.113186  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.113227  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.113935  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:29:45.163864  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:29:45.189286  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:29:45.237278  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:29:45.263467  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:29:45.289513  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:29:45.309018  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:29:45.329141  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:29:45.347665  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:29:45.365433  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:29:45.383209  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:29:45.402144  941476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:29:45.415520  941476 ssh_runner.go:195] Run: openssl version
	I1213 10:29:45.421431  941476 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:29:45.421939  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.429504  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:29:45.436991  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440561  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440796  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440864  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.483791  941476 command_runner.go:130] > 51391683
	I1213 10:29:45.484209  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:29:45.491520  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.498932  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:29:45.509018  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513215  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513301  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513386  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.554662  941476 command_runner.go:130] > 3ec20f2e
	I1213 10:29:45.555104  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:29:45.562598  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.570035  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:29:45.578308  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582322  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582399  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582459  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.623357  941476 command_runner.go:130] > b5213941
	I1213 10:29:45.623846  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:29:45.631423  941476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635203  941476 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635226  941476 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:29:45.635232  941476 command_runner.go:130] > Device: 259,1	Inode: 1052598     Links: 1
	I1213 10:29:45.635239  941476 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:45.635245  941476 command_runner.go:130] > Access: 2025-12-13 10:25:37.832562674 +0000
	I1213 10:29:45.635250  941476 command_runner.go:130] > Modify: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635255  941476 command_runner.go:130] > Change: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635260  941476 command_runner.go:130] >  Birth: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635337  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:29:45.676331  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.676780  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:29:45.719984  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.720440  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:29:45.763044  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.763152  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:29:45.804752  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.805187  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:29:45.846806  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.847203  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:29:45.898203  941476 command_runner.go:130] > Certificate will not expire
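
	Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; "Certificate will not expire" means it will. A rough Go equivalent, reusing one of the cert paths checked above (any of them would do):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Equivalent of -checkend 86400: expiry within the next 24 hours?
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
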
	I1213 10:29:45.898680  941476 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:45.898809  941476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:29:45.898933  941476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:29:45.924889  941476 cri.go:89] found id: ""
	I1213 10:29:45.924989  941476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:29:45.932161  941476 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:29:45.932226  941476 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:29:45.932248  941476 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:29:45.933123  941476 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:29:45.933177  941476 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:29:45.933244  941476 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:29:45.940638  941476 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:29:45.941072  941476 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-200955" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.941185  941476 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-904040/kubeconfig needs updating (will repair): [kubeconfig missing "functional-200955" cluster setting kubeconfig missing "functional-200955" context setting]
	I1213 10:29:45.941452  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.941955  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.942103  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.942644  941476 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:29:45.942668  941476 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:29:45.942678  941476 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:29:45.942683  941476 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:29:45.942687  941476 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:29:45.942727  941476 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
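
	The rest.Config dump above shows the client minikube builds from the repaired kubeconfig: client certificate, key, and CA taken from the profile directory. A minimal client-go sketch that loads the same kubeconfig file — illustrative only, not minikube's own code:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Same kubeconfig path the log shows being repaired above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22128-904040/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", cs != nil)
	}
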
	I1213 10:29:45.943068  941476 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:29:45.951089  941476 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:29:45.951121  941476 kubeadm.go:602] duration metric: took 17.93243ms to restartPrimaryControlPlane
	I1213 10:29:45.951143  941476 kubeadm.go:403] duration metric: took 52.461003ms to StartCluster
	I1213 10:29:45.951159  941476 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951223  941476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.951796  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951989  941476 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:29:45.952368  941476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:29:45.952448  941476 addons.go:70] Setting storage-provisioner=true in profile "functional-200955"
	I1213 10:29:45.952463  941476 addons.go:239] Setting addon storage-provisioner=true in "functional-200955"
	I1213 10:29:45.952488  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.952566  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:45.952610  941476 addons.go:70] Setting default-storageclass=true in profile "functional-200955"
	I1213 10:29:45.952623  941476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-200955"
	I1213 10:29:45.952911  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.952951  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.958523  941476 out.go:179] * Verifying Kubernetes components...
	I1213 10:29:45.963377  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.989193  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.989357  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.989643  941476 addons.go:239] Setting addon default-storageclass=true in "functional-200955"
	I1213 10:29:45.989674  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.990084  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.996374  941476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:29:45.999301  941476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:45.999325  941476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:29:45.999389  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.025120  941476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.025146  941476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:29:46.025210  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.047237  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.065614  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.182514  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:46.188367  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:46.228034  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.975760  941476 node_ready.go:35] waiting up to 6m0s for node "functional-200955" to be "Ready" ...
	I1213 10:29:46.975884  941476 type.go:168] "Request Body" body=""
	I1213 10:29:46.975940  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:46.976159  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976214  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976242  941476 retry.go:31] will retry after 310.714541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976276  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976296  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976306  941476 retry.go:31] will retry after 212.322267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.188794  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.245508  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.249207  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.249253  941476 retry.go:31] will retry after 232.449188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.287510  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.352377  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.355988  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.356022  941476 retry.go:31] will retry after 216.845813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.476461  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.476540  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.476866  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.482125  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.540633  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.540674  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.540713  941476 retry.go:31] will retry after 621.150122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.573847  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.632148  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.632198  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.632239  941476 retry.go:31] will retry after 652.105841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.976625  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.976714  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.977047  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.162374  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.224014  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.224050  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.224096  941476 retry.go:31] will retry after 486.360631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.285241  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:48.341512  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.345196  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.345232  941476 retry.go:31] will retry after 851.054667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.476501  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.476654  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.477264  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.710766  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.774597  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.774656  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.774677  941476 retry.go:31] will retry after 1.42902923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:48.976568  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
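
The interleaved GET requests against `/api/v1/nodes/functional-200955` are the node-readiness poll: roughly every 500ms the client fetches the node object and, while nothing is listening on 8441, gets `connection refused` and logs the warning above. Below is a minimal sketch of such a poll, assuming plain HTTPS with verification disabled and the JSON (rather than protobuf) representation; the endpoint and node name are taken from the log, everything else (including the omission of client credentials) is illustrative.

```go
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus mirrors just the fields of a v1.Node needed to read the
// Ready condition from the JSON the apiserver returns.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls GET /api/v1/nodes/<name> until the Ready condition
// is True, treating transport errors (e.g. connection refused while the
// apiserver restarts) as retryable. Illustrative sketch only.
func waitNodeReady(base, name string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/api/v1/nodes/" + name)
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		var node nodeStatus
		err = json.NewDecoder(resp.Body).Decode(&node)
		resp.Body.Close()
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	// Endpoint and node name as seen in the log; a real client would also
	// authenticate, which this sketch omits.
	if err := waitNodeReady("https://192.168.49.2:8441", "functional-200955", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
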
	I1213 10:29:49.197102  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:49.269601  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:49.269709  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.269757  941476 retry.go:31] will retry after 1.296706305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.476109  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.476573  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:49.976081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.976179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.204048  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:50.263787  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.263835  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.263857  941476 retry.go:31] will retry after 2.257067811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.476081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.476171  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.566907  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:50.629271  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.629314  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.629333  941476 retry.go:31] will retry after 1.765407868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.976841  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.976923  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.977217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:50.977269  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:51.475933  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.476012  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.476290  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:51.976028  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.395020  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:52.456823  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.456875  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.456899  941476 retry.go:31] will retry after 1.561909689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.476063  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.476147  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.476449  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.521915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:52.578203  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.581870  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.581904  941476 retry.go:31] will retry after 3.834800834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.976296  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.976371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.976640  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:53.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:53.476481  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:53.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.976238  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.976665  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.019913  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:54.081795  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:54.081851  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.081875  941476 retry.go:31] will retry after 4.858817388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.476105  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.476182  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.476432  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.976093  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.976415  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:55.476129  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.476226  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:55.476588  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:55.976456  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.976520  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:56.417572  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:56.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.476423  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:56.480436  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.480483  941476 retry.go:31] will retry after 4.792687173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.976051  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.976145  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.476051  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.976104  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.976249  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.976601  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:57.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:58.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.476277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:58.940954  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:58.976372  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.976458  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.976716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.010699  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:59.010740  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.010759  941476 retry.go:31] will retry after 7.734765537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.476520  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.476594  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.476930  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.976794  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.977198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:59.977252  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:00.476972  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.477066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.477383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:00.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.976196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.976547  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.274155  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:01.347774  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:01.347813  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.347834  941476 retry.go:31] will retry after 9.325183697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.478515  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.478628  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.479014  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.976839  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.976947  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.977331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:01.977404  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:02.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.476537  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.976649  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.476192  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.476276  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.976228  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.976352  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.976726  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:04.476318  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.476410  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.476740  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:04.476799  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:04.976561  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.976631  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.976878  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.476699  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.476787  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.477120  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.977016  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.977144  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.977510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.476060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.476330  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.746112  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:06.805144  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:06.808651  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.808685  941476 retry.go:31] will retry after 7.088599712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.976026  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.976116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:06.976507  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:07.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.476279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.476634  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:07.976084  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.976170  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.976444  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.476153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.476482  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.976213  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.976308  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.976642  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:08.976701  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:09.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.476212  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:09.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.976492  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.476265  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.476368  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.476715  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.673230  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:10.732312  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:10.736051  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.736087  941476 retry.go:31] will retry after 8.123592788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.976475  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.976550  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.976847  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:10.976888  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:11.476725  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.476822  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.477169  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:11.976044  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.976120  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.976458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.976059  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.976141  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:13.476058  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.476137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.476490  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:13.476548  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:13.898101  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:13.964340  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:13.967836  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.967879  941476 retry.go:31] will retry after 8.492520723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.976068  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.476442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.976067  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.976142  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.476080  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.975941  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.976026  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.976392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:15.976452  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:16.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.476159  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:16.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.976102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.476174  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.976100  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.976180  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.976600  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:17.976654  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:18.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:18.476393  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:18.859953  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:18.916800  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:18.920763  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.920813  941476 retry.go:31] will retry after 11.17407044s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-200955 polled every ~500ms from 10:30:18.976 through 10:30:21.977; every response empty (connection refused), node_ready "will retry" warning logged at 10:30:20.476 ...]
	I1213 10:30:22.460571  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:22.476131  941476 type.go:168] "Request Body" body=""
	I1213 10:30:22.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:22.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:22.521379  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:22.525059  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:22.525092  941476 retry.go:31] will retry after 25.139993985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls against https://192.168.49.2:8441/api/v1/nodes/functional-200955 continued every ~500ms from 10:30:22.977 through 10:30:29.977, all refused; node_ready "will retry" warnings at 10:30:22.977, 10:30:25.476, and 10:30:27.976 ...]
	I1213 10:30:30.096045  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:30.160844  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:30.160891  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:30.160917  941476 retry.go:31] will retry after 23.835716192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls continued every ~500ms from 10:30:30.476 through 10:30:47.476, all refused; node_ready "will retry" warnings repeated every ~2s ...]
	I1213 10:30:47.665860  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:47.731394  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:47.731441  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:47.731460  941476 retry.go:31] will retry after 19.194003802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls continued every ~500ms from 10:30:47.976 through 10:30:53.976, all refused; node_ready "will retry" warnings at 10:30:48.476, 10:30:50.976, and 10:30:52.977 ...]
	I1213 10:30:53.997712  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:54.059604  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:54.063660  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:54.063694  941476 retry.go:31] will retry after 30.126310408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polls continued every ~500ms from 10:30:54.476 through 10:31:06.477, all refused; node_ready "will retry" warnings at 10:30:55.476, 10:30:57.976, 10:30:59.976, 10:31:02.476, and 10:31:04.976 ...]
	I1213 10:31:06.925824  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:06.976406  941476 type.go:168] "Request Body" body=""
	I1213 10:31:06.976485  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:06.976757  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:06.976800  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:06.991385  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991438  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991540  941476 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:31:07.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:31:07.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:07.476475  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:07.976033  941476 type.go:168] "Request Body" body=""
	I1213 10:31:07.976116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:07.976413  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:08.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:31:08.476162  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.476480  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:08.976217  941476 type.go:168] "Request Body" body=""
	I1213 10:31:08.976318  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:08.976675  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:09.476351  941476 type.go:168] "Request Body" body=""
	I1213 10:31:09.476424  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.476761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:09.476820  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:09.976571  941476 type.go:168] "Request Body" body=""
	I1213 10:31:09.976678  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:09.977059  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:10.476721  941476 type.go:168] "Request Body" body=""
	I1213 10:31:10.476799  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.477208  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:10.975925  941476 type.go:168] "Request Body" body=""
	I1213 10:31:10.975997  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:10.976250  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.475973  941476 type.go:168] "Request Body" body=""
	I1213 10:31:11.476050  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.476395  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.976476  941476 type.go:168] "Request Body" body=""
	I1213 10:31:11.976551  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.976955  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:11.977016  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.476754  941476 type.go:168] "Request Body" body=""
	I1213 10:31:12.476839  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.477117  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.976506  941476 type.go:168] "Request Body" body=""
	I1213 10:31:12.976583  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.976915  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.476748  941476 type.go:168] "Request Body" body=""
	I1213 10:31:13.476846  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.477198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.975894  941476 type.go:168] "Request Body" body=""
	I1213 10:31:13.975961  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.976227  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.475943  941476 type.go:168] "Request Body" body=""
	I1213 10:31:14.476062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.476400  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.476469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:14.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.976509  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.476219  941476 type.go:168] "Request Body" body=""
	I1213 10:31:15.476292  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.476567  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.976650  941476 type.go:168] "Request Body" body=""
	I1213 10:31:15.976734  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.977073  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.476854  941476 type.go:168] "Request Body" body=""
	I1213 10:31:16.476948  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.477273  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:16.477330  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:16.976000  941476 type.go:168] "Request Body" body=""
	I1213 10:31:16.976073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.976427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.476545  941476 type.go:168] "Request Body" body=""
	I1213 10:31:17.476677  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.477181  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.976852  941476 type.go:168] "Request Body" body=""
	I1213 10:31:17.976935  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.977261  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.475978  941476 type.go:168] "Request Body" body=""
	I1213 10:31:18.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.476322  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.976069  941476 type.go:168] "Request Body" body=""
	I1213 10:31:18.976149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.976500  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:18.976571  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.476245  941476 type.go:168] "Request Body" body=""
	I1213 10:31:19.476328  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.476669  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.976355  941476 type.go:168] "Request Body" body=""
	I1213 10:31:19.976423  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.976681  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.476070  941476 type.go:168] "Request Body" body=""
	I1213 10:31:20.476146  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.976239  941476 type.go:168] "Request Body" body=""
	I1213 10:31:20.976313  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.976664  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:20.976722  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:21.476108  941476 type.go:168] "Request Body" body=""
	I1213 10:31:21.476196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.476546  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:21.976459  941476 type.go:168] "Request Body" body=""
	I1213 10:31:21.976535  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.976854  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.476728  941476 type.go:168] "Request Body" body=""
	I1213 10:31:22.476820  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.477138  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.976866  941476 type.go:168] "Request Body" body=""
	I1213 10:31:22.976937  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.977188  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:22.977229  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:23.475910  941476 type.go:168] "Request Body" body=""
	I1213 10:31:23.475992  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.476337  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:23.976033  941476 type.go:168] "Request Body" body=""
	I1213 10:31:23.976146  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.976483  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.190915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:24.248888  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.248934  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.249045  941476 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:31:24.254122  941476 out.go:179] * Enabled addons: 
	I1213 10:31:24.256914  941476 addons.go:530] duration metric: took 1m38.304545325s for enable addons: enabled=[]
	I1213 10:31:24.476214  941476 type.go:168] "Request Body" body=""
	I1213 10:31:24.476305  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.476571  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.976075  941476 type.go:168] "Request Body" body=""
	I1213 10:31:24.976150  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:24.976469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:25.475994  941476 type.go:168] "Request Body" body=""
	I1213 10:31:25.476100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.476424  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:25.476482  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:25.976304  941476 type.go:168] "Request Body" body=""
	I1213 10:31:25.976372  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:25.976622  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:26.476058  941476 type.go:168] "Request Body" body=""
	I1213 10:31:26.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:26.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:31:26.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:26.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.475988  941476 type.go:168] "Request Body" body=""
	I1213 10:31:27.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.476317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:27.976108  941476 type.go:168] "Request Body" body=""
	I1213 10:31:27.976196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:27.976535  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:27.976591  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:28.476254  941476 type.go:168] "Request Body" body=""
	I1213 10:31:28.476381  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.476716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:28.975973  941476 type.go:168] "Request Body" body=""
	I1213 10:31:28.976047  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:28.976353  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.476048  941476 type.go:168] "Request Body" body=""
	I1213 10:31:29.476126  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.476474  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:29.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:31:29.976247  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:29.976617  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:29.976678  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:30.476323  941476 type.go:168] "Request Body" body=""
	I1213 10:31:30.476391  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.476664  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:30.976054  941476 type.go:168] "Request Body" body=""
	I1213 10:31:30.976128  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:30.976456  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.476168  941476 type.go:168] "Request Body" body=""
	I1213 10:31:31.476269  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.476567  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:31.976505  941476 type.go:168] "Request Body" body=""
	I1213 10:31:31.976574  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:31.976850  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:31.976891  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:32.476715  941476 type.go:168] "Request Body" body=""
	I1213 10:31:32.476794  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.477154  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:32.976964  941476 type.go:168] "Request Body" body=""
	I1213 10:31:32.977041  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:32.977388  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.476013  941476 type.go:168] "Request Body" body=""
	I1213 10:31:33.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.476329  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:33.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:31:33.976119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:33.976457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:34.476014  941476 type.go:168] "Request Body" body=""
	I1213 10:31:34.476100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.476438  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:34.476494  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:34.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:34.976087  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:34.976342  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:35.476018  941476 type.go:168] "Request Body" body=""
	I1213 10:31:35.476143  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.476462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:35.976397  941476 type.go:168] "Request Body" body=""
	I1213 10:31:35.976481  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:35.976852  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:36.476416  941476 type.go:168] "Request Body" body=""
	I1213 10:31:36.476490  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.476745  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:36.476785  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:36.976682  941476 type.go:168] "Request Body" body=""
	I1213 10:31:36.976776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:36.977178  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:37.476965  941476 type.go:168] "Request Body" body=""
	I1213 10:31:37.477045  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.477383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:37.976029  941476 type.go:168] "Request Body" body=""
	I1213 10:31:37.976095  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:37.976361  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:31:38.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:38.975981  941476 type.go:168] "Request Body" body=""
	I1213 10:31:38.976069  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:38.976409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:38.976469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:39.476151  941476 type.go:168] "Request Body" body=""
	I1213 10:31:39.476225  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.476508  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:39.976053  941476 type.go:168] "Request Body" body=""
	I1213 10:31:39.976130  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:39.976448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:31:40.476119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:40.976091  941476 type.go:168] "Request Body" body=""
	I1213 10:31:40.976170  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:40.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:41.476048  941476 type.go:168] "Request Body" body=""
	I1213 10:31:41.476125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.476626  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:41.476675  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:41.976630  941476 type.go:168] "Request Body" body=""
	I1213 10:31:41.976743  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:41.977553  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:42.475972  941476 type.go:168] "Request Body" body=""
	I1213 10:31:42.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.476364  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:42.976014  941476 type.go:168] "Request Body" body=""
	I1213 10:31:42.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:42.976440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:31:43.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:43.975985  941476 type.go:168] "Request Body" body=""
	I1213 10:31:43.976054  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:43.976344  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:43.976397  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:44.476016  941476 type.go:168] "Request Body" body=""
	I1213 10:31:44.476093  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.476411  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:44.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:31:44.976151  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:44.976503  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.476045  941476 type.go:168] "Request Body" body=""
	I1213 10:31:45.476120  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.476386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:45.976018  941476 type.go:168] "Request Body" body=""
	I1213 10:31:45.976092  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:45.976393  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:45.976440  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:46.476013  941476 type.go:168] "Request Body" body=""
	I1213 10:31:46.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:46.975976  941476 type.go:168] "Request Body" body=""
	I1213 10:31:46.976048  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:46.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:31:47.476109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.476419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:47.976020  941476 type.go:168] "Request Body" body=""
	I1213 10:31:47.976095  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:47.976422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:47.976480  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:48.476004  941476 type.go:168] "Request Body" body=""
	I1213 10:31:48.476083  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.476391  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:48.976026  941476 type.go:168] "Request Body" body=""
	I1213 10:31:48.976109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:48.976439  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.476029  941476 type.go:168] "Request Body" body=""
	I1213 10:31:49.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:49.976130  941476 type.go:168] "Request Body" body=""
	I1213 10:31:49.976202  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:49.976477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:49.976519  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:50.476169  941476 type.go:168] "Request Body" body=""
	I1213 10:31:50.476246  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.476586  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:50.976287  941476 type.go:168] "Request Body" body=""
	I1213 10:31:50.976360  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:50.976729  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.476495  941476 type.go:168] "Request Body" body=""
	I1213 10:31:51.476574  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.476839  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:51.976777  941476 type.go:168] "Request Body" body=""
	I1213 10:31:51.976892  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:51.977255  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:51.977312  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:52.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:31:52.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.476505  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:52.975986  941476 type.go:168] "Request Body" body=""
	I1213 10:31:52.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:52.976377  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:53.476003  941476 type.go:168] "Request Body" body=""
	I1213 10:31:53.476081  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.476419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:53.976122  941476 type.go:168] "Request Body" body=""
	I1213 10:31:53.976204  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:53.976539  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:54.476283  941476 type.go:168] "Request Body" body=""
	I1213 10:31:54.476358  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.476609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:54.476652  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:54.976007  941476 type.go:168] "Request Body" body=""
	I1213 10:31:54.976081  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:54.976403  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.476020  941476 type.go:168] "Request Body" body=""
	I1213 10:31:55.476101  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:55.976175  941476 type.go:168] "Request Body" body=""
	I1213 10:31:55.976246  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:55.976517  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:31:56.476086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.476452  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:56.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:56.976090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:56.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:56.976513  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:57.476145  941476 type.go:168] "Request Body" body=""
	I1213 10:31:57.476215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:57.476478  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the cycle above repeats unchanged from 10:31:57.976 through 10:32:58.976: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-200955 is re-issued every ~500 ms with identical Accept and User-Agent headers, every attempt gets no response (status="" headers="" milliseconds=0), and node_ready.go logs the connection-refused "will retry" warning every 2 to 2.5 s ...]
	I1213 10:32:59.476097  941476 type.go:168] "Request Body" body=""
	I1213 10:32:59.476175  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.476508  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:59.476569  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:59.976006  941476 type.go:168] "Request Body" body=""
	I1213 10:32:59.976086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:59.976416  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.476102  941476 type.go:168] "Request Body" body=""
	I1213 10:33:00.476181  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.476460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:00.976047  941476 type.go:168] "Request Body" body=""
	I1213 10:33:00.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:00.976487  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.476029  941476 type.go:168] "Request Body" body=""
	I1213 10:33:01.476105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:01.975971  941476 type.go:168] "Request Body" body=""
	I1213 10:33:01.976042  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:01.976355  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:01.976407  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:02.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:02.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.476438  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:33:02.976252  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:02.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.476323  941476 type.go:168] "Request Body" body=""
	I1213 10:33:03.476399  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.476657  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:03.976052  941476 type.go:168] "Request Body" body=""
	I1213 10:33:03.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:03.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:03.976518  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:04.476187  941476 type.go:168] "Request Body" body=""
	I1213 10:33:04.476262  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.476613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:04.976299  941476 type.go:168] "Request Body" body=""
	I1213 10:33:04.976377  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:04.976641  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.476304  941476 type.go:168] "Request Body" body=""
	I1213 10:33:05.476380  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.476711  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:05.976815  941476 type.go:168] "Request Body" body=""
	I1213 10:33:05.976895  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:05.977239  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:05.977294  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:06.475975  941476 type.go:168] "Request Body" body=""
	I1213 10:33:06.476047  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.476308  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:06.976045  941476 type.go:168] "Request Body" body=""
	I1213 10:33:06.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:06.976516  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.476071  941476 type.go:168] "Request Body" body=""
	I1213 10:33:07.476148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.476544  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:07.976078  941476 type.go:168] "Request Body" body=""
	I1213 10:33:07.976149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:07.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:08.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:33:08.476099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:08.476487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:08.976023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:08.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:08.976462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.476176  941476 type.go:168] "Request Body" body=""
	I1213 10:33:09.476251  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.476526  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:09.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:33:09.976104  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:09.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:10.476184  941476 type.go:168] "Request Body" body=""
	I1213 10:33:10.476271  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.476609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:10.476665  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:10.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:33:10.976076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:10.976358  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:11.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:33:11.476129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.476473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:11.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:11.976106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:11.976465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.476140  941476 type.go:168] "Request Body" body=""
	I1213 10:33:12.476209  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.476469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:12.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:12.976099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:12.976394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:12.976444  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:13.476111  941476 type.go:168] "Request Body" body=""
	I1213 10:33:13.476187  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.476533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:13.976215  941476 type.go:168] "Request Body" body=""
	I1213 10:33:13.976284  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:13.976554  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:14.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:14.976164  941476 type.go:168] "Request Body" body=""
	I1213 10:33:14.976241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:14.976581  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:14.976644  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:15.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:33:15.476046  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.476298  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:15.975945  941476 type.go:168] "Request Body" body=""
	I1213 10:33:15.976032  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:15.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.476144  941476 type.go:168] "Request Body" body=""
	I1213 10:33:16.476219  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.476559  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:16.976466  941476 type.go:168] "Request Body" body=""
	I1213 10:33:16.976541  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:16.976809  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:16.976860  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:17.476687  941476 type.go:168] "Request Body" body=""
	I1213 10:33:17.476761  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.477087  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:17.976932  941476 type.go:168] "Request Body" body=""
	I1213 10:33:17.977005  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:17.977321  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:18.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:33:18.476076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.476392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:18.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:33:18.976114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:18.976472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:19.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:33:19.476090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.476437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:19.476492  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:19.975984  941476 type.go:168] "Request Body" body=""
	I1213 10:33:19.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:19.976331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:20.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:20.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:20.976140  941476 type.go:168] "Request Body" body=""
	I1213 10:33:20.976215  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:20.976570  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:21.476259  941476 type.go:168] "Request Body" body=""
	I1213 10:33:21.476335  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.476598  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:21.476641  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:21.976641  941476 type.go:168] "Request Body" body=""
	I1213 10:33:21.976721  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:21.977055  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:22.476842  941476 type.go:168] "Request Body" body=""
	I1213 10:33:22.476921  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.477263  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:22.975958  941476 type.go:168] "Request Body" body=""
	I1213 10:33:22.976026  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:22.976279  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:23.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.476440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:23.976154  941476 type.go:168] "Request Body" body=""
	I1213 10:33:23.976230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:23.976599  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:23.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:24.476302  941476 type.go:168] "Request Body" body=""
	I1213 10:33:24.476382  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:24.476643  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:24.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:24.976088  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:24.976409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:25.476125  941476 type.go:168] "Request Body" body=""
	I1213 10:33:25.476201  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:25.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:25.976508  941476 type.go:168] "Request Body" body=""
	I1213 10:33:25.976580  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:25.976838  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:25.976879  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:26.476580  941476 type.go:168] "Request Body" body=""
	I1213 10:33:26.476662  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:26.476989  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:26.975922  941476 type.go:168] "Request Body" body=""
	I1213 10:33:26.976010  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:26.976354  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:27.476098  941476 type.go:168] "Request Body" body=""
	I1213 10:33:27.476184  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:27.476458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:27.976019  941476 type.go:168] "Request Body" body=""
	I1213 10:33:27.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:27.976466  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:28.476177  941476 type.go:168] "Request Body" body=""
	I1213 10:33:28.476256  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:28.476603  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:28.476659  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:28.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:33:28.976063  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:28.976324  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:29.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:33:29.476066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:29.476404  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:29.975996  941476 type.go:168] "Request Body" body=""
	I1213 10:33:29.976077  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:29.976425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:30.476099  941476 type.go:168] "Request Body" body=""
	I1213 10:33:30.476165  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:30.476425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:30.976057  941476 type.go:168] "Request Body" body=""
	I1213 10:33:30.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:30.976428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:30.976479  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:31.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:33:31.476102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:31.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:31.975996  941476 type.go:168] "Request Body" body=""
	I1213 10:33:31.976062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:31.976317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:32.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:33:32.476123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:32.476463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:32.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:33:32.976239  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:32.976579  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:32.976636  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:33.476103  941476 type.go:168] "Request Body" body=""
	I1213 10:33:33.476175  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:33.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:33.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:33:33.976098  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:33.976421  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:34.476116  941476 type.go:168] "Request Body" body=""
	I1213 10:33:34.476189  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:34.476493  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:34.976055  941476 type.go:168] "Request Body" body=""
	I1213 10:33:34.976123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:34.976382  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:35.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:33:35.476099  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:35.476443  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:35.476499  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:35.975939  941476 type.go:168] "Request Body" body=""
	I1213 10:33:35.976014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:35.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:36.475982  941476 type.go:168] "Request Body" body=""
	I1213 10:33:36.476086  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:36.476409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:36.976046  941476 type.go:168] "Request Body" body=""
	I1213 10:33:36.976117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:36.976443  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:37.476164  941476 type.go:168] "Request Body" body=""
	I1213 10:33:37.476242  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:37.476524  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:37.476575  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:37.976198  941476 type.go:168] "Request Body" body=""
	I1213 10:33:37.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:37.976533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:38.476039  941476 type.go:168] "Request Body" body=""
	I1213 10:33:38.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:38.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:38.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:33:38.976199  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:38.976530  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:39.476088  941476 type.go:168] "Request Body" body=""
	I1213 10:33:39.476161  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:39.476422  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:39.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:33:39.976084  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:39.976397  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:39.976449  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:40.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:33:40.476076  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:40.476414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:40.976095  941476 type.go:168] "Request Body" body=""
	I1213 10:33:40.976167  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:40.976436  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:41.476022  941476 type.go:168] "Request Body" body=""
	I1213 10:33:41.476094  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.476397  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:41.976013  941476 type.go:168] "Request Body" body=""
	I1213 10:33:41.976092  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:41.976658  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:41.976706  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:42.475980  941476 type.go:168] "Request Body" body=""
	I1213 10:33:42.476055  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.476675  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:42.976377  941476 type.go:168] "Request Body" body=""
	I1213 10:33:42.976455  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:42.976815  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:43.476621  941476 type.go:168] "Request Body" body=""
	I1213 10:33:43.476701  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.477037  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:43.976823  941476 type.go:168] "Request Body" body=""
	I1213 10:33:43.976888  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:43.977141  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:43.977181  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:44.476934  941476 type.go:168] "Request Body" body=""
	I1213 10:33:44.477006  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.477335  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:44.976009  941476 type.go:168] "Request Body" body=""
	I1213 10:33:44.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:44.976470  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.476027  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.476385  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:45.976244  941476 type.go:168] "Request Body" body=""
	I1213 10:33:45.976320  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:45.976638  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:46.476051  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.476479  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:46.476535  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:46.975987  941476 type.go:168] "Request Body" body=""
	I1213 10:33:46.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:46.976313  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:47.976041  941476 type.go:168] "Request Body" body=""
	I1213 10:33:47.976125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:47.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:48.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.476522  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:48.476583  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:48.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:33:48.976075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:48.976407  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 123 identical request/response cycles omitted: the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-200955 was retried every ~500ms from 10:33:49.476 to 10:34:50.476, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 warning 'error getting node "functional-200955" condition "Ready" status (will retry)' recurred every 2-3s (26 occurrences) ...]
	I1213 10:34:50.976173  941476 type.go:168] "Request Body" body=""
	I1213 10:34:50.976252  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:50.976562  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:50.976609  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:51.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:34:51.476058  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.476380  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:51.976214  941476 type.go:168] "Request Body" body=""
	I1213 10:34:51.976292  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:51.976634  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:52.476361  941476 type.go:168] "Request Body" body=""
	I1213 10:34:52.476438  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:52.476777  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:52.976550  941476 type.go:168] "Request Body" body=""
	I1213 10:34:52.976620  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:52.976884  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:52.976928  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:53.476704  941476 type.go:168] "Request Body" body=""
	I1213 10:34:53.476789  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.477137  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:53.976929  941476 type.go:168] "Request Body" body=""
	I1213 10:34:53.977004  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:53.977333  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:54.476034  941476 type.go:168] "Request Body" body=""
	I1213 10:34:54.476106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:54.476377  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:54.976023  941476 type.go:168] "Request Body" body=""
	I1213 10:34:54.976105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:54.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:55.476046  941476 type.go:168] "Request Body" body=""
	I1213 10:34:55.476127  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.476479  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:55.476535  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:55.976212  941476 type.go:168] "Request Body" body=""
	I1213 10:34:55.976283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:55.976540  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:56.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:34:56.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.476472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:56.976530  941476 type.go:168] "Request Body" body=""
	I1213 10:34:56.976612  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:56.977004  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:57.476807  941476 type.go:168] "Request Body" body=""
	I1213 10:34:57.476890  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.477154  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:57.477196  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:57.977018  941476 type.go:168] "Request Body" body=""
	I1213 10:34:57.977109  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:57.977446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:58.476146  941476 type.go:168] "Request Body" body=""
	I1213 10:34:58.476225  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.476550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:58.976270  941476 type.go:168] "Request Body" body=""
	I1213 10:34:58.976346  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:58.976611  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.476051  941476 type.go:168] "Request Body" body=""
	I1213 10:34:59.476143  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.476548  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:59.976128  941476 type.go:168] "Request Body" body=""
	I1213 10:34:59.976213  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:59.976516  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:59.976563  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:00.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:35:00.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:00.976317  941476 type.go:168] "Request Body" body=""
	I1213 10:35:00.976411  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:00.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:01.476609  941476 type.go:168] "Request Body" body=""
	I1213 10:35:01.476689  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.477045  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:01.976793  941476 type.go:168] "Request Body" body=""
	I1213 10:35:01.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:01.977145  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:01.977189  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:02.476982  941476 type.go:168] "Request Body" body=""
	I1213 10:35:02.477061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.477408  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:02.976099  941476 type.go:168] "Request Body" body=""
	I1213 10:35:02.976178  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:02.976550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:03.476237  941476 type.go:168] "Request Body" body=""
	I1213 10:35:03.476319  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.476595  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:03.976292  941476 type.go:168] "Request Body" body=""
	I1213 10:35:03.976381  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:03.976725  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:04.476528  941476 type.go:168] "Request Body" body=""
	I1213 10:35:04.476603  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.476926  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:04.476983  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:04.976700  941476 type.go:168] "Request Body" body=""
	I1213 10:35:04.976771  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:04.977027  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:05.476835  941476 type.go:168] "Request Body" body=""
	I1213 10:35:05.476914  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.477258  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:05.976201  941476 type.go:168] "Request Body" body=""
	I1213 10:35:05.976279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:05.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.476362  941476 type.go:168] "Request Body" body=""
	I1213 10:35:06.476440  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.476705  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:06.976610  941476 type.go:168] "Request Body" body=""
	I1213 10:35:06.976688  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:06.977052  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:06.977113  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:07.476898  941476 type.go:168] "Request Body" body=""
	I1213 10:35:07.476978  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.477359  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:07.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:35:07.976075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:07.976399  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.476093  941476 type.go:168] "Request Body" body=""
	I1213 10:35:08.476179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:08.976239  941476 type.go:168] "Request Body" body=""
	I1213 10:35:08.976318  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:08.976631  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:09.475998  941476 type.go:168] "Request Body" body=""
	I1213 10:35:09.476070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.476334  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:09.476377  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:09.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:35:09.976103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:09.976446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.476153  941476 type.go:168] "Request Body" body=""
	I1213 10:35:10.476230  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.476565  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:10.976284  941476 type.go:168] "Request Body" body=""
	I1213 10:35:10.976359  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:10.976641  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:11.476331  941476 type.go:168] "Request Body" body=""
	I1213 10:35:11.476408  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.476754  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:11.476819  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:11.976620  941476 type.go:168] "Request Body" body=""
	I1213 10:35:11.976709  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:11.977042  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:12.476813  941476 type.go:168] "Request Body" body=""
	I1213 10:35:12.476885  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.477142  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:12.976929  941476 type.go:168] "Request Body" body=""
	I1213 10:35:12.977022  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:12.977398  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.476001  941476 type.go:168] "Request Body" body=""
	I1213 10:35:13.476080  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.476431  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:13.976122  941476 type.go:168] "Request Body" body=""
	I1213 10:35:13.976192  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:13.976457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:13.976500  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:14.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:35:14.476065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.476409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:14.976135  941476 type.go:168] "Request Body" body=""
	I1213 10:35:14.976241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:14.976610  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.476299  941476 type.go:168] "Request Body" body=""
	I1213 10:35:15.476374  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.476636  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:15.976597  941476 type.go:168] "Request Body" body=""
	I1213 10:35:15.976678  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:15.977009  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:15.977062  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:16.476828  941476 type.go:168] "Request Body" body=""
	I1213 10:35:16.476909  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.477284  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:16.975983  941476 type.go:168] "Request Body" body=""
	I1213 10:35:16.976057  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:16.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:17.476005  941476 type.go:168] "Request Body" body=""
	I1213 10:35:17.476082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:17.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:17.976147  941476 type.go:168] "Request Body" body=""
	I1213 10:35:17.976234  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:17.976566  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:18.476096  941476 type.go:168] "Request Body" body=""
	I1213 10:35:18.476172  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:18.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:18.476495  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:18.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:35:18.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:18.976435  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:19.476137  941476 type.go:168] "Request Body" body=""
	I1213 10:35:19.476227  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:19.476564  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:19.976248  941476 type.go:168] "Request Body" body=""
	I1213 10:35:19.976327  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:19.976600  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:20.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:35:20.476115  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:20.476474  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:20.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:20.976196  941476 type.go:168] "Request Body" body=""
	I1213 10:35:20.976277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:20.976613  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:21.476306  941476 type.go:168] "Request Body" body=""
	I1213 10:35:21.476385  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:21.476650  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:21.976568  941476 type.go:168] "Request Body" body=""
	I1213 10:35:21.976645  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:21.976977  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:22.476793  941476 type.go:168] "Request Body" body=""
	I1213 10:35:22.476870  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:22.477217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:22.477279  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:22.975966  941476 type.go:168] "Request Body" body=""
	I1213 10:35:22.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:22.976311  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:23.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:35:23.476125  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:23.476480  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:23.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:35:23.976153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:23.976505  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:24.476197  941476 type.go:168] "Request Body" body=""
	I1213 10:35:24.476265  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:24.476534  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:24.976215  941476 type.go:168] "Request Body" body=""
	I1213 10:35:24.976288  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:24.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:24.976686  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:25.476352  941476 type.go:168] "Request Body" body=""
	I1213 10:35:25.476428  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:25.476773  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:25.976631  941476 type.go:168] "Request Body" body=""
	I1213 10:35:25.976701  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:25.976974  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:26.476849  941476 type.go:168] "Request Body" body=""
	I1213 10:35:26.476924  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:26.477262  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:26.976050  941476 type.go:168] "Request Body" body=""
	I1213 10:35:26.976131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:26.976463  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:27.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:35:27.476053  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:27.476355  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:27.476414  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:27.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:27.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:27.976388  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:28.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:35:28.476210  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:28.476540  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:28.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:35:28.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:28.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:29.476011  941476 type.go:168] "Request Body" body=""
	I1213 10:35:29.476091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:29.476427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:29.476488  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:29.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:35:29.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:29.976433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:30.476132  941476 type.go:168] "Request Body" body=""
	I1213 10:35:30.476208  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:30.476494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:30.976178  941476 type.go:168] "Request Body" body=""
	I1213 10:35:30.976260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:30.976576  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:31.476298  941476 type.go:168] "Request Body" body=""
	I1213 10:35:31.476371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:31.476716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:31.476774  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:31.976573  941476 type.go:168] "Request Body" body=""
	I1213 10:35:31.976645  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:31.976917  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:32.476713  941476 type.go:168] "Request Body" body=""
	I1213 10:35:32.476790  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:32.477195  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:32.975947  941476 type.go:168] "Request Body" body=""
	I1213 10:35:32.976021  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:32.976319  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:33.476001  941476 type.go:168] "Request Body" body=""
	I1213 10:35:33.476069  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.476324  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:33.976034  941476 type.go:168] "Request Body" body=""
	I1213 10:35:33.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:33.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:33.976512  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:34.476185  941476 type.go:168] "Request Body" body=""
	I1213 10:35:34.476263  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.476596  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:34.975980  941476 type.go:168] "Request Body" body=""
	I1213 10:35:34.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:34.976361  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:35.476014  941476 type.go:168] "Request Body" body=""
	I1213 10:35:35.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.476451  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:35.976934  941476 type.go:168] "Request Body" body=""
	I1213 10:35:35.977011  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:35.977366  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:35.977428  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:36.476062  941476 type.go:168] "Request Body" body=""
	I1213 10:35:36.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.476417  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:36.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:35:36.976334  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:36.976678  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.476392  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.476480  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.476822  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:37.976607  941476 type.go:168] "Request Body" body=""
	I1213 10:35:37.976691  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:37.976956  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:38.476714  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.476786  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.477099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:38.477160  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:38.976964  941476 type.go:168] "Request Body" body=""
	I1213 10:35:38.977048  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:38.977472  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.476371  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:39.976024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:39.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:39.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.476166  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.476250  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.476607  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:40.975981  941476 type.go:168] "Request Body" body=""
	I1213 10:35:40.976060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:40.976331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:40.976379  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:41.476023  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.476426  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:41.976057  941476 type.go:168] "Request Body" body=""
	I1213 10:35:41.976134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:41.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.476061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.480099  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1213 10:35:42.976928  941476 type.go:168] "Request Body" body=""
	I1213 10:35:42.977007  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:42.977373  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:42.977438  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:43.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.476136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.476497  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:43.976064  941476 type.go:168] "Request Body" body=""
	I1213 10:35:43.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:43.976405  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.476012  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:44.976181  941476 type.go:168] "Request Body" body=""
	I1213 10:35:44.976260  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:44.976576  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:45.476263  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.476338  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.476639  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:35:45.476717  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:45.976693  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.976776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.977113  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.476938  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.477014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.477384  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.976091  941476 node_ready.go:38] duration metric: took 6m0.000294728s for node "functional-200955" to be "Ready" ...
	I1213 10:35:46.979089  941476 out.go:203] 
	W1213 10:35:46.981875  941476 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:35:46.981899  941476 out.go:285] * 
	W1213 10:35:46.984058  941476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:35:46.987297  941476 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:35:55 functional-200955 crio[5381]: time="2025-12-13T10:35:55.812081647Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=f2f4c3b0-bfe0-432d-92c1-d6428259751f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.879814239Z" level=info msg="Checking image status: minikube-local-cache-test:functional-200955" id=d082875c-eac0-435e-a97d-6a8a8a566f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.880023267Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.880082279Z" level=info msg="Image minikube-local-cache-test:functional-200955 not found" id=d082875c-eac0-435e-a97d-6a8a8a566f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.880184319Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-200955 found" id=d082875c-eac0-435e-a97d-6a8a8a566f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.906408418Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-200955" id=9d156be8-dad9-4da7-af6e-b617028f9b26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.906580014Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-200955 not found" id=9d156be8-dad9-4da7-af6e-b617028f9b26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.906641676Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-200955 found" id=9d156be8-dad9-4da7-af6e-b617028f9b26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.930593009Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-200955" id=6282e5a5-2317-4a94-86bd-2371e00a7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.930756965Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-200955 not found" id=6282e5a5-2317-4a94-86bd-2371e00a7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.930815772Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-200955 found" id=6282e5a5-2317-4a94-86bd-2371e00a7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:57 functional-200955 crio[5381]: time="2025-12-13T10:35:57.929707147Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3ad8d8ae-4a91-4542-8f03-6bcf33113c85 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.268997012Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0a5da476-2f30-4d30-9b78-e331345e7aa0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.269167795Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=0a5da476-2f30-4d30-9b78-e331345e7aa0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.269210208Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=0a5da476-2f30-4d30-9b78-e331345e7aa0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.88396171Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4f1e871a-ac94-4a6d-aac9-b3d3bd7ad6d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.884104898Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=4f1e871a-ac94-4a6d-aac9-b3d3bd7ad6d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.884142765Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=4f1e871a-ac94-4a6d-aac9-b3d3bd7ad6d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.913583895Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c3d18fbc-cdd5-4bc8-8f2c-c45bb317a75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.913736159Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c3d18fbc-cdd5-4bc8-8f2c-c45bb317a75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.913773763Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=c3d18fbc-cdd5-4bc8-8f2c-c45bb317a75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.93940333Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7990bddd-bcf9-42ae-a236-f1f3f0aff19c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.939569756Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7990bddd-bcf9-42ae-a236-f1f3f0aff19c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.939610627Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=7990bddd-bcf9-42ae-a236-f1f3f0aff19c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:59 functional-200955 crio[5381]: time="2025-12-13T10:35:59.52627618Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=03c3c44e-c2d0-4965-9033-b96398801e9f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:36:01.465483    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:01.466246    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:01.467947    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:01.468502    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:01.469984    9373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:36:01 up  5:18,  0 user,  load average: 0.53, 0.39, 0.89
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:35:59 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:35:59 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 13 10:35:59 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:59 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:35:59 functional-200955 kubelet[9264]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:59 functional-200955 kubelet[9264]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:35:59 functional-200955 kubelet[9264]: E1213 10:35:59.789745    9264 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:35:59 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:35:59 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:36:00 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 13 10:36:00 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:00 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:00 functional-200955 kubelet[9284]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:00 functional-200955 kubelet[9284]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:00 functional-200955 kubelet[9284]: E1213 10:36:00.567770    9284 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:36:00 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:36:00 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:36:01 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 13 10:36:01 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:01 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:01 functional-200955 kubelet[9328]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:01 functional-200955 kubelet[9328]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:01 functional-200955 kubelet[9328]: E1213 10:36:01.285560    9328 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:36:01 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:36:01 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
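The kubelet section of the log above pins down the failure: kubelet v1.35.0-beta.0 refuses to start because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver static pod never comes up and every request to 192.168.49.2:8441 is refused. A minimal sketch of confirming the cgroup mode by hand, assuming the functional-200955 container is still running; these probe commands are illustrative and not part of the recorded test run:

    # Filesystem type backing /sys/fs/cgroup on the host:
    # "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1.
    stat -fc %T /sys/fs/cgroup/

    # Same check inside the minikube node container:
    docker exec functional-200955 stat -fc %T /sys/fs/cgroup/

    # Tail the kubelet restart loop driving the counter seen above:
    docker exec functional-200955 journalctl -u kubelet --no-pager -n 20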
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (359.544695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.82s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-200955 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-200955 get pods: exit status 1 (113.994048ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-200955 get pods": exit status 1
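The refused connection is consistent with the earlier start failure: with kubelet unable to run, nothing is listening on 192.168.49.2:8441, so kubectl fails before any API call is made. A sketch of probing the endpoint directly, assuming the same cluster IP and port as above (illustrative commands, not part of the recorded run):

    # With no listener, this fails immediately with "connection refused"
    # rather than a TLS or HTTP error.
    curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"

    # Verbose kubectl prints the exact GET that is being refused.
    kubectl --context functional-200955 get pods -v=6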
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
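The inspect output shows 8441/tcp (the apiserver port) published on 127.0.0.1:33526, so container networking itself is intact; the refusals come from nothing listening inside the node. A sketch for extracting that binding, mirroring the Go template minikube itself uses for 22/tcp later in the start log (commands are illustrative):

    # Shorthand: print the host binding for the apiserver port.
    docker port functional-200955 8441

    # Equivalent Go-template query against the inspect data above.
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-200955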
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (328.54397ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 logs -n 25: (1.020686494s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-769798 image ls --format json --alsologtostderr                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr                                            │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format table --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls                                                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete         │ -p functional-769798                                                                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start          │ -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-200955 --alsologtostderr -v=8                                                                                                       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:latest                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add minikube-local-cache-test:functional-200955                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache delete minikube-local-cache-test:functional-200955                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl images                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	│ cache          │ functional-200955 cache reload                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ kubectl        │ functional-200955 kubectl -- --context functional-200955 get pods                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:29:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:29:41.597851  941476 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:29:41.597968  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.597980  941476 out.go:374] Setting ErrFile to fd 2...
	I1213 10:29:41.597985  941476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:29:41.598264  941476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:29:41.598640  941476 out.go:368] Setting JSON to false
	I1213 10:29:41.599496  941476 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18731,"bootTime":1765603051,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:29:41.599570  941476 start.go:143] virtualization:  
	I1213 10:29:41.603284  941476 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:29:41.606132  941476 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:29:41.606240  941476 notify.go:221] Checking for updates...
	I1213 10:29:41.611909  941476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:29:41.614766  941476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:41.617588  941476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:29:41.620495  941476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:29:41.623575  941476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:29:41.626951  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:41.627063  941476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:29:41.660528  941476 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:29:41.660648  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.716071  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.706597811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.716181  941476 docker.go:319] overlay module found
	I1213 10:29:41.719241  941476 out.go:179] * Using the docker driver based on existing profile
	I1213 10:29:41.721997  941476 start.go:309] selected driver: docker
	I1213 10:29:41.722027  941476 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.722127  941476 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:29:41.722252  941476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:29:41.778165  941476 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:29:41.768783539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:29:41.778600  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:41.778650  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:41.778703  941476 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:41.781806  941476 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:29:41.784501  941476 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:29:41.787625  941476 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:29:41.790577  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:41.790637  941476 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:29:41.790650  941476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:29:41.790656  941476 cache.go:65] Caching tarball of preloaded images
	I1213 10:29:41.790739  941476 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:29:41.790750  941476 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:29:41.790859  941476 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:29:41.809947  941476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:29:41.809969  941476 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:29:41.809989  941476 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:29:41.810023  941476 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:29:41.810091  941476 start.go:364] duration metric: took 45.924µs to acquireMachinesLock for "functional-200955"
	I1213 10:29:41.810115  941476 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:29:41.810124  941476 fix.go:54] fixHost starting: 
	I1213 10:29:41.810397  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:41.827321  941476 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:29:41.827351  941476 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:29:41.830448  941476 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:29:41.830480  941476 machine.go:94] provisionDockerMachine start ...
	I1213 10:29:41.830562  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:41.846863  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:41.847197  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:41.847214  941476 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:29:41.996943  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:41.996971  941476 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:29:41.997042  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.018825  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.019169  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.019192  941476 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:29:42.186347  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:29:42.186459  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.209314  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.209694  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.209712  941476 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:29:42.370026  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
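
The hostname step just logged is idempotent: the script pushed over SSH only touches /etc/hosts when no existing line already maps the machine name, and it prefers rewriting the 127.0.1.1 entry over appending a new one. A standalone re-run of the same sequence, with the profile name pulled out into a shell variable (hypothetical; minikube templates the name in directly), would look like:

    # hypothetical standalone form of the hostname provisioning above
    HOST=functional-200955
    sudo hostname "$HOST" && echo "$HOST" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$HOST" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOST/g" /etc/hosts
      else
        echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
      fi
    fi
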
	I1213 10:29:42.370125  941476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:29:42.370174  941476 ubuntu.go:190] setting up certificates
	I1213 10:29:42.370200  941476 provision.go:84] configureAuth start
	I1213 10:29:42.370268  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:42.388638  941476 provision.go:143] copyHostCerts
	I1213 10:29:42.388684  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388728  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:29:42.388739  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:29:42.388819  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:29:42.388924  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388947  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:29:42.388956  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:29:42.388985  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:29:42.389034  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389056  941476 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:29:42.389064  941476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:29:42.389093  941476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:29:42.389148  941476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:29:42.553052  941476 provision.go:177] copyRemoteCerts
	I1213 10:29:42.553125  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:29:42.553174  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.571937  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:42.681380  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:29:42.681440  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:29:42.698297  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:29:42.698381  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:29:42.715245  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:29:42.715360  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:29:42.732152  941476 provision.go:87] duration metric: took 361.926272ms to configureAuth
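
configureAuth above does three things: refreshes the host-side copies of ca.pem, cert.pem and key.pem under .minikube (the found/rm/cp triplets), generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, the profile name, localhost and minikube, and then scps ca.pem, server.pem and server-key.pem into /etc/docker on the node. (The tripled /etc/docker in the mkdir is simply the dirname of each of the three remote cert paths.) A hypothetical spot-check of the resulting remote layout, reusing the key and port from the ssh client created above:

    # hypothetical: verify the three certs landed on the node
    ssh -i /home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa \
      -p 33523 docker@127.0.0.1 \
      'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'
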
	I1213 10:29:42.732184  941476 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:29:42.732358  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:42.732458  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:42.749290  941476 main.go:143] libmachine: Using SSH client type: native
	I1213 10:29:42.749620  941476 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:29:42.749643  941476 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:29:43.093593  941476 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:29:43.093619  941476 machine.go:97] duration metric: took 1.263130563s to provisionDockerMachine
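
The sysconfig drop-in written above hands CRI-O --insecure-registry for 10.96.0.0/12, which is the cluster's ServiceCIDR (see the cluster config further down), so registries exposed inside the cluster on ClusterIPs can be pulled from without TLS. A hypothetical check that the restart picked the option up:

    # hypothetical: confirm the drop-in exists and crio came back up
    ssh -i /home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa \
      -p 33523 docker@127.0.0.1 \
      'cat /etc/sysconfig/crio.minikube; systemctl show -p ActiveState crio'
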
	I1213 10:29:43.093630  941476 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:29:43.093643  941476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:29:43.093703  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:29:43.093752  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.110551  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.213067  941476 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:29:43.216076  941476 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:29:43.216096  941476 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:29:43.216102  941476 command_runner.go:130] > VERSION_ID="12"
	I1213 10:29:43.216108  941476 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:29:43.216112  941476 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:29:43.216116  941476 command_runner.go:130] > ID=debian
	I1213 10:29:43.216121  941476 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:29:43.216125  941476 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:29:43.216147  941476 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:29:43.216196  941476 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:29:43.216219  941476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:29:43.216231  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:29:43.216286  941476 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:29:43.216365  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:29:43.216375  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /etc/ssl/certs/9074842.pem
	I1213 10:29:43.216452  941476 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:29:43.216461  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> /etc/test/nested/copy/907484/hosts
	I1213 10:29:43.216512  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:29:43.223706  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:43.242619  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:29:43.261652  941476 start.go:296] duration metric: took 168.007176ms for postStartSetup
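
The filesync pass inside postStartSetup mirrors everything under the host's .minikube/files directory into the node's root filesystem, preserving relative paths; that is why files/etc/ssl/certs/9074842.pem lands at /etc/ssl/certs/9074842.pem above. A hypothetical way to stage another file the same way:

    # hypothetical: any path under .minikube/files/ maps to the same absolute path on the node
    mkdir -p /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/motd.d
    echo 'hello from the host' \
      > /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/motd.d/minikube
    # expected to be synced to /etc/motd.d/minikube on the next start
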
	I1213 10:29:43.261748  941476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:29:43.261797  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.278068  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.377852  941476 command_runner.go:130] > 19%
	I1213 10:29:43.378272  941476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:29:43.382521  941476 command_runner.go:130] > 159G
	I1213 10:29:43.382892  941476 fix.go:56] duration metric: took 1.572759496s for fixHost
	I1213 10:29:43.382913  941476 start.go:83] releasing machines lock for "functional-200955", held for 1.572809064s
	I1213 10:29:43.382984  941476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:29:43.399315  941476 ssh_runner.go:195] Run: cat /version.json
	I1213 10:29:43.399334  941476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:29:43.399371  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.399397  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:43.423081  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.424445  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:43.612877  941476 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:29:43.615557  941476 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:29:43.615725  941476 ssh_runner.go:195] Run: systemctl --version
	I1213 10:29:43.621711  941476 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:29:43.621746  941476 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:29:43.622124  941476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:29:43.667216  941476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:29:43.671902  941476 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:29:43.672160  941476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:29:43.672241  941476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:29:43.679969  941476 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:29:43.679994  941476 start.go:496] detecting cgroup driver to use...
	I1213 10:29:43.680025  941476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:29:43.680082  941476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:29:43.694816  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:29:43.708840  941476 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:29:43.708902  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:29:43.727390  941476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:29:43.741194  941476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:29:43.853170  941476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:29:43.965117  941476 docker.go:234] disabling docker service ...
	I1213 10:29:43.965193  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:29:43.981069  941476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:29:43.993651  941476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:29:44.106510  941476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:29:44.230950  941476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
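
Since the runtime here is CRI-O, both cri-docker and docker are taken out of the picture; masking (rather than merely disabling) keeps socket activation from bringing them back. The sequence above, condensed into a hypothetical standalone script:

    # stop, disable and mask the competing runtimes (as run above)
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active docker || true   # expected: inactive
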
	I1213 10:29:44.243823  941476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:29:44.258241  941476 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:29:44.259524  941476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:29:44.259625  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.267965  941476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:29:44.268046  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.277059  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.285643  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.295522  941476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:29:44.303650  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.312274  941476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.320905  941476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.329531  941476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:29:44.336129  941476 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:29:44.337017  941476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:29:44.344665  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:44.479199  941476 ssh_runner.go:195] Run: sudo systemctl restart crio
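
Applied to a stock kicbase image, the sed edits above would leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (a reconstruction from the commands, not a dump of the actual file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
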
	I1213 10:29:44.656815  941476 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:29:44.656943  941476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:29:44.660542  941476 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:29:44.660573  941476 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:29:44.660581  941476 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1213 10:29:44.660588  941476 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:44.660594  941476 command_runner.go:130] > Access: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660602  941476 command_runner.go:130] > Modify: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660608  941476 command_runner.go:130] > Change: 2025-12-13 10:29:44.589977594 +0000
	I1213 10:29:44.660615  941476 command_runner.go:130] >  Birth: -
	I1213 10:29:44.660643  941476 start.go:564] Will wait 60s for crictl version
	I1213 10:29:44.660697  941476 ssh_runner.go:195] Run: which crictl
	I1213 10:29:44.664032  941476 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:29:44.664157  941476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:29:44.686934  941476 command_runner.go:130] > Version:  0.1.0
	I1213 10:29:44.686958  941476 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:29:44.686965  941476 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:29:44.686970  941476 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:29:44.687007  941476 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
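
The two 60-second waits above (first for the socket file, then for a crictl that answers) amount to a polling loop. A hypothetical shell equivalent:

    # poll up to 60s for the CRI-O socket, then confirm the runtime answers
    for _ in $(seq 1 60); do
      test -S /var/run/crio/crio.sock && break
      sleep 1
    done
    CRICTL="$(which crictl)"
    sudo "$CRICTL" version
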
	I1213 10:29:44.687101  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.715374  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.715400  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.715407  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.715412  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.715417  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.715422  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.715435  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.715442  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.715446  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.715453  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.715457  941476 command_runner.go:130] >      static
	I1213 10:29:44.715461  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.715464  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.715476  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.715480  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.715484  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.715492  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.715496  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.715504  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.715508  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.717596  941476 ssh_runner.go:195] Run: crio --version
	I1213 10:29:44.744267  941476 command_runner.go:130] > crio version 1.34.3
	I1213 10:29:44.744305  941476 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:29:44.744312  941476 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:29:44.744317  941476 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:29:44.744322  941476 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:29:44.744327  941476 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:29:44.744331  941476 command_runner.go:130] >    Compiler:       gc
	I1213 10:29:44.744337  941476 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:29:44.744341  941476 command_runner.go:130] >    Linkmode:       static
	I1213 10:29:44.744346  941476 command_runner.go:130] >    BuildTags:
	I1213 10:29:44.744350  941476 command_runner.go:130] >      static
	I1213 10:29:44.744376  941476 command_runner.go:130] >      netgo
	I1213 10:29:44.744385  941476 command_runner.go:130] >      osusergo
	I1213 10:29:44.744390  941476 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:29:44.744393  941476 command_runner.go:130] >      seccomp
	I1213 10:29:44.744397  941476 command_runner.go:130] >      apparmor
	I1213 10:29:44.744406  941476 command_runner.go:130] >      selinux
	I1213 10:29:44.744411  941476 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:29:44.744419  941476 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:29:44.744424  941476 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:29:44.751529  941476 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:29:44.754410  941476 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:29:44.770603  941476 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:29:44.774419  941476 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:29:44.774622  941476 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:29:44.774752  941476 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:29:44.774840  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.811833  941476 command_runner.go:130] > {
	I1213 10:29:44.811851  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.811855  941476 command_runner.go:130] >     {
	I1213 10:29:44.811864  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.811869  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811875  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.811879  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811883  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811892  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.811900  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.811904  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811908  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.811912  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811920  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811923  941476 command_runner.go:130] >     },
	I1213 10:29:44.811927  941476 command_runner.go:130] >     {
	I1213 10:29:44.811933  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.811938  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.811944  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.811947  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811951  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.811959  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.811968  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.811980  941476 command_runner.go:130] >       ],
	I1213 10:29:44.811984  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.811988  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.811994  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.811997  941476 command_runner.go:130] >     },
	I1213 10:29:44.812000  941476 command_runner.go:130] >     {
	I1213 10:29:44.812007  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.812011  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812017  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.812020  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812024  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812032  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.812040  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.812047  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812051  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.812056  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.812059  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812062  941476 command_runner.go:130] >     },
	I1213 10:29:44.812066  941476 command_runner.go:130] >     {
	I1213 10:29:44.812073  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.812076  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812081  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.812085  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812089  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812097  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.812104  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.812109  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812113  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.812116  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812120  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812123  941476 command_runner.go:130] >       },
	I1213 10:29:44.812132  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812136  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812143  941476 command_runner.go:130] >     },
	I1213 10:29:44.812146  941476 command_runner.go:130] >     {
	I1213 10:29:44.812152  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.812156  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812161  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.812164  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812168  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812176  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.812184  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.812187  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812191  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.812195  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812198  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812201  941476 command_runner.go:130] >       },
	I1213 10:29:44.812204  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812208  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812211  941476 command_runner.go:130] >     },
	I1213 10:29:44.812213  941476 command_runner.go:130] >     {
	I1213 10:29:44.812220  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.812224  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812230  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.812233  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812236  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812244  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.812253  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.812256  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812259  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.812263  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812266  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812269  941476 command_runner.go:130] >       },
	I1213 10:29:44.812273  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812277  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812280  941476 command_runner.go:130] >     },
	I1213 10:29:44.812286  941476 command_runner.go:130] >     {
	I1213 10:29:44.812293  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.812296  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812302  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.812304  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812308  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812316  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.812323  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.812326  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812330  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.812334  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812337  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812340  941476 command_runner.go:130] >     },
	I1213 10:29:44.812343  941476 command_runner.go:130] >     {
	I1213 10:29:44.812349  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.812353  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812358  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.812361  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812364  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812372  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.812390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.812393  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812397  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.812400  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812405  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.812408  941476 command_runner.go:130] >       },
	I1213 10:29:44.812412  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812416  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.812419  941476 command_runner.go:130] >     },
	I1213 10:29:44.812422  941476 command_runner.go:130] >     {
	I1213 10:29:44.812428  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.812432  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.812436  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.812442  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812446  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.812454  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.812462  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.812464  941476 command_runner.go:130] >       ],
	I1213 10:29:44.812468  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.812471  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.812475  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.812478  941476 command_runner.go:130] >       },
	I1213 10:29:44.812482  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.812485  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.812488  941476 command_runner.go:130] >     }
	I1213 10:29:44.812491  941476 command_runner.go:130] >   ]
	I1213 10:29:44.812494  941476 command_runner.go:130] > }
	I1213 10:29:44.812656  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.812664  941476 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:29:44.812720  941476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:29:44.834840  941476 command_runner.go:130] > {
	I1213 10:29:44.834859  941476 command_runner.go:130] >   "images":  [
	I1213 10:29:44.834863  941476 command_runner.go:130] >     {
	I1213 10:29:44.834871  941476 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:29:44.834878  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834893  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:29:44.834897  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834903  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834913  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:29:44.834921  941476 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:29:44.834924  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834928  941476 command_runner.go:130] >       "size":  "111333938",
	I1213 10:29:44.834932  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.834941  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.834944  941476 command_runner.go:130] >     },
	I1213 10:29:44.834947  941476 command_runner.go:130] >     {
	I1213 10:29:44.834953  941476 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:29:44.834957  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.834962  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:29:44.834965  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834969  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.834977  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:29:44.834986  941476 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:29:44.834989  941476 command_runner.go:130] >       ],
	I1213 10:29:44.834993  941476 command_runner.go:130] >       "size":  "29037500",
	I1213 10:29:44.834997  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835006  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835009  941476 command_runner.go:130] >     },
	I1213 10:29:44.835013  941476 command_runner.go:130] >     {
	I1213 10:29:44.835019  941476 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:29:44.835023  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835028  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:29:44.835032  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835036  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835044  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:29:44.835052  941476 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:29:44.835055  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835058  941476 command_runner.go:130] >       "size":  "74491780",
	I1213 10:29:44.835062  941476 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:29:44.835066  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835069  941476 command_runner.go:130] >     },
	I1213 10:29:44.835073  941476 command_runner.go:130] >     {
	I1213 10:29:44.835080  941476 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:29:44.835083  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835088  941476 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:29:44.835093  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835100  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835108  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:29:44.835116  941476 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:29:44.835119  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835123  941476 command_runner.go:130] >       "size":  "60857170",
	I1213 10:29:44.835127  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835131  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835134  941476 command_runner.go:130] >       },
	I1213 10:29:44.835147  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835151  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835154  941476 command_runner.go:130] >     },
	I1213 10:29:44.835157  941476 command_runner.go:130] >     {
	I1213 10:29:44.835163  941476 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:29:44.835167  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835172  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:29:44.835175  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835179  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835187  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:29:44.835195  941476 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:29:44.835197  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835201  941476 command_runner.go:130] >       "size":  "84949999",
	I1213 10:29:44.835205  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835209  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835212  941476 command_runner.go:130] >       },
	I1213 10:29:44.835215  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835219  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835222  941476 command_runner.go:130] >     },
	I1213 10:29:44.835224  941476 command_runner.go:130] >     {
	I1213 10:29:44.835231  941476 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:29:44.835234  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835240  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:29:44.835243  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835247  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835261  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:29:44.835270  941476 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:29:44.835273  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835277  941476 command_runner.go:130] >       "size":  "72170325",
	I1213 10:29:44.835281  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835285  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835288  941476 command_runner.go:130] >       },
	I1213 10:29:44.835292  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835295  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835298  941476 command_runner.go:130] >     },
	I1213 10:29:44.835302  941476 command_runner.go:130] >     {
	I1213 10:29:44.835309  941476 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:29:44.835312  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835318  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:29:44.835320  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835324  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835332  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:29:44.835340  941476 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:29:44.835343  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835347  941476 command_runner.go:130] >       "size":  "74106775",
	I1213 10:29:44.835351  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835355  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835358  941476 command_runner.go:130] >     },
	I1213 10:29:44.835361  941476 command_runner.go:130] >     {
	I1213 10:29:44.835367  941476 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:29:44.835371  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835376  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:29:44.835379  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835383  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835390  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:29:44.835407  941476 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:29:44.835411  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835415  941476 command_runner.go:130] >       "size":  "49822549",
	I1213 10:29:44.835422  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835426  941476 command_runner.go:130] >         "value":  "0"
	I1213 10:29:44.835429  941476 command_runner.go:130] >       },
	I1213 10:29:44.835433  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835436  941476 command_runner.go:130] >       "pinned":  false
	I1213 10:29:44.835439  941476 command_runner.go:130] >     },
	I1213 10:29:44.835442  941476 command_runner.go:130] >     {
	I1213 10:29:44.835449  941476 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:29:44.835452  941476 command_runner.go:130] >       "repoTags":  [
	I1213 10:29:44.835457  941476 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.835460  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835463  941476 command_runner.go:130] >       "repoDigests":  [
	I1213 10:29:44.835470  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:29:44.835478  941476 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:29:44.835481  941476 command_runner.go:130] >       ],
	I1213 10:29:44.835485  941476 command_runner.go:130] >       "size":  "519884",
	I1213 10:29:44.835489  941476 command_runner.go:130] >       "uid":  {
	I1213 10:29:44.835492  941476 command_runner.go:130] >         "value":  "65535"
	I1213 10:29:44.835495  941476 command_runner.go:130] >       },
	I1213 10:29:44.835499  941476 command_runner.go:130] >       "username":  "",
	I1213 10:29:44.835503  941476 command_runner.go:130] >       "pinned":  true
	I1213 10:29:44.835506  941476 command_runner.go:130] >     }
	I1213 10:29:44.835508  941476 command_runner.go:130] >   ]
	I1213 10:29:44.835512  941476 command_runner.go:130] > }
	I1213 10:29:44.838144  941476 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:29:44.838206  941476 cache_images.go:86] Images are preloaded, skipping loading
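
The two identical image dumps above are not duplication in the report: sudo crictl images --output json is run once to decide whether the preload tarball needs extracting and once more before the image-cache check, and both runs see the same nine preloaded images. A quicker way to read the same list (assuming jq is available on the node):

    # hypothetical: list just the tags from the JSON shown above
    sudo crictl images --output json | jq -r '.images[].repoTags[]'
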
	I1213 10:29:44.838219  941476 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:29:44.838324  941476 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
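
The doubled ExecStart in the kubelet unit above is deliberate systemd drop-in syntax: an empty ExecStart= clears whatever command the base kubelet.service defined before the minikube-specific command line is installed. As a drop-in file it would take roughly this shape (path illustrative):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative path)
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
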
	I1213 10:29:44.838426  941476 ssh_runner.go:195] Run: crio config
	I1213 10:29:44.886075  941476 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:29:44.886098  941476 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:29:44.886106  941476 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:29:44.886110  941476 command_runner.go:130] > #
	I1213 10:29:44.886117  941476 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:29:44.886124  941476 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:29:44.886130  941476 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:29:44.886139  941476 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:29:44.886142  941476 command_runner.go:130] > # reload'.
	I1213 10:29:44.886162  941476 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:29:44.886169  941476 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:29:44.886175  941476 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:29:44.886181  941476 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:29:44.886184  941476 command_runner.go:130] > [crio]
	I1213 10:29:44.886190  941476 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:29:44.886195  941476 command_runner.go:130] > # containers images, in this directory.
	I1213 10:29:44.886932  941476 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:29:44.886948  941476 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:29:44.887520  941476 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:29:44.887536  941476 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:29:44.887990  941476 command_runner.go:130] > # imagestore = ""
	I1213 10:29:44.888002  941476 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:29:44.888019  941476 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:29:44.888390  941476 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:29:44.888402  941476 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:29:44.888409  941476 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:29:44.888578  941476 command_runner.go:130] > # storage_option = [
	I1213 10:29:44.888743  941476 command_runner.go:130] > # ]
	I1213 10:29:44.888754  941476 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:29:44.888761  941476 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:29:44.888765  941476 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:29:44.888771  941476 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:29:44.888787  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:29:44.888792  941476 command_runner.go:130] > # always happen on a node reboot
	I1213 10:29:44.888797  941476 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:29:44.888807  941476 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:29:44.888813  941476 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:29:44.888818  941476 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:29:44.888822  941476 command_runner.go:130] > # version_file_persist = ""
	I1213 10:29:44.888829  941476 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:29:44.888839  941476 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:29:44.888843  941476 command_runner.go:130] > # internal_wipe = true
	I1213 10:29:44.888851  941476 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:29:44.888856  941476 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:29:44.888860  941476 command_runner.go:130] > # internal_repair = true
	I1213 10:29:44.888869  941476 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:29:44.888875  941476 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:29:44.888881  941476 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:29:44.888886  941476 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
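
The storage keys above are commented-out defaults. A minimal override sketch for /etc/crio/crio.conf, assuming a dedicated storage volume (paths illustrative, not taken from this run):

    [crio]
    # Store image and container data under a dedicated mount (hypothetical path).
    root = "/var/lib/containers/storage"
    runroot = "/run/containers/storage"
    # overlay is the usual default driver; see containers-storage.conf(5).
    storage_driver = "overlay"
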
	I1213 10:29:44.888892  941476 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:29:44.888895  941476 command_runner.go:130] > [crio.api]
	I1213 10:29:44.888901  941476 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:29:44.888905  941476 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:29:44.888910  941476 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:29:44.888914  941476 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:29:44.888921  941476 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:29:44.888926  941476 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:29:44.888929  941476 command_runner.go:130] > # stream_port = "0"
	I1213 10:29:44.888934  941476 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:29:44.888938  941476 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:29:44.888944  941476 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:29:44.889110  941476 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:29:44.889121  941476 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:29:44.889127  941476 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889131  941476 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:29:44.889137  941476 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:29:44.889143  941476 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:29:44.889156  941476 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:29:44.889162  941476 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:29:44.889169  941476 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:29:44.889177  941476 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:29:44.889181  941476 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:29:44.889197  941476 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889202  941476 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:29:44.889209  941476 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:29:44.889214  941476 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
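
Enabling TLS on the stream server described above means setting the certificate, key, and CA paths together; a sketch with placeholder paths (all three files are watched, so rotation is picked up without a restart):

    [crio.api]
    stream_enable_tls = true
    stream_tls_cert = "/etc/crio/certs/stream.crt"  # hypothetical path
    stream_tls_key = "/etc/crio/certs/stream.key"   # hypothetical path
    stream_tls_ca = "/etc/crio/certs/ca.crt"        # hypothetical path
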
	I1213 10:29:44.889220  941476 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:29:44.889225  941476 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:29:44.889229  941476 command_runner.go:130] > [crio.runtime]
	I1213 10:29:44.889235  941476 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:29:44.889240  941476 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:29:44.889244  941476 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:29:44.889253  941476 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:29:44.889257  941476 command_runner.go:130] > # default_ulimits = [
	I1213 10:29:44.889260  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889265  941476 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:29:44.889269  941476 command_runner.go:130] > # no_pivot = false
	I1213 10:29:44.889274  941476 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:29:44.889280  941476 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:29:44.889285  941476 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:29:44.889291  941476 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:29:44.889296  941476 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:29:44.889318  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889322  941476 command_runner.go:130] > # conmon = ""
	I1213 10:29:44.889327  941476 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:29:44.889333  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:29:44.889512  941476 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:29:44.889563  941476 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:29:44.889585  941476 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:29:44.889610  941476 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:29:44.889647  941476 command_runner.go:130] > # conmon_env = [
	I1213 10:29:44.889671  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889696  941476 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:29:44.889721  941476 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:29:44.889753  941476 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:29:44.889776  941476 command_runner.go:130] > # default_env = [
	I1213 10:29:44.889797  941476 command_runner.go:130] > # ]
	I1213 10:29:44.889822  941476 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:29:44.889858  941476 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:29:44.889885  941476 command_runner.go:130] > # selinux = false
	I1213 10:29:44.889906  941476 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:29:44.889932  941476 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:29:44.889962  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.889985  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.890009  941476 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:29:44.890029  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890061  941476 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:29:44.890087  941476 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:29:44.890109  941476 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:29:44.890133  941476 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:29:44.890166  941476 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:29:44.890191  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890212  941476 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:29:44.890236  941476 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:29:44.890284  941476 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:29:44.890307  941476 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:29:44.890329  941476 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:29:44.890350  941476 command_runner.go:130] > # blockio parameters.
	I1213 10:29:44.890409  941476 command_runner.go:130] > # blockio_reload = false
	I1213 10:29:44.890437  941476 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:29:44.890458  941476 command_runner.go:130] > # irqbalance daemon.
	I1213 10:29:44.890483  941476 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:29:44.890515  941476 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:29:44.890551  941476 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:29:44.890575  941476 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:29:44.890599  941476 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:29:44.890631  941476 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:29:44.890655  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.890676  941476 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:29:44.890716  941476 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:29:44.890743  941476 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:29:44.890767  941476 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:29:44.890788  941476 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:29:44.890824  941476 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:29:44.890863  941476 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:29:44.890886  941476 command_runner.go:130] > # will be added.
	I1213 10:29:44.890904  941476 command_runner.go:130] > # default_capabilities = [
	I1213 10:29:44.890932  941476 command_runner.go:130] > # 	"CHOWN",
	I1213 10:29:44.890957  941476 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:29:44.891256  941476 command_runner.go:130] > # 	"FSETID",
	I1213 10:29:44.891291  941476 command_runner.go:130] > # 	"FOWNER",
	I1213 10:29:44.891318  941476 command_runner.go:130] > # 	"SETGID",
	I1213 10:29:44.891335  941476 command_runner.go:130] > # 	"SETUID",
	I1213 10:29:44.891390  941476 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:29:44.891416  941476 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:29:44.891438  941476 command_runner.go:130] > # 	"KILL",
	I1213 10:29:44.891461  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891498  941476 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:29:44.891527  941476 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:29:44.891550  941476 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:29:44.891572  941476 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:29:44.891606  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891629  941476 command_runner.go:130] > default_sysctls = [
	I1213 10:29:44.891651  941476 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:29:44.891671  941476 command_runner.go:130] > ]
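
This instance sets net.ipv4.ip_unprivileged_port_start=0 so containers can bind low ports without extra capabilities. A sketch of extending these runtime defaults, using the ulimit format documented above (values illustrative):

    [crio.runtime]
    default_ulimits = [
        "nofile=1024:2048",  # <ulimit name>=<soft limit>:<hard limit>
    ]
    default_sysctls = [
        "net.ipv4.ip_unprivileged_port_start=0",
    ]
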
	I1213 10:29:44.891705  941476 command_runner.go:130] > # List of devices on the host that a
	I1213 10:29:44.891730  941476 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:29:44.891749  941476 command_runner.go:130] > # allowed_devices = [
	I1213 10:29:44.891779  941476 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:29:44.891809  941476 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:29:44.891834  941476 command_runner.go:130] > # ]
	I1213 10:29:44.891856  941476 command_runner.go:130] > # List of additional devices, specified as
	I1213 10:29:44.891880  941476 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:29:44.891914  941476 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:29:44.891940  941476 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:29:44.891962  941476 command_runner.go:130] > # additional_devices = [
	I1213 10:29:44.891983  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892017  941476 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:29:44.892041  941476 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:29:44.892063  941476 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:29:44.892082  941476 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:29:44.892103  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892139  941476 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:29:44.892161  941476 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:29:44.892183  941476 command_runner.go:130] > # Defaults to false.
	I1213 10:29:44.892215  941476 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:29:44.892243  941476 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:29:44.892267  941476 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:29:44.892287  941476 command_runner.go:130] > # hooks_dir = [
	I1213 10:29:44.892324  941476 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:29:44.892349  941476 command_runner.go:130] > # ]
	I1213 10:29:44.892371  941476 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:29:44.892394  941476 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:29:44.892427  941476 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:29:44.892450  941476 command_runner.go:130] > #
	I1213 10:29:44.892472  941476 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:29:44.892496  941476 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:29:44.892529  941476 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:29:44.892555  941476 command_runner.go:130] > #
	I1213 10:29:44.892582  941476 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:29:44.892608  941476 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:29:44.892654  941476 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:29:44.892680  941476 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:29:44.892700  941476 command_runner.go:130] > #
	I1213 10:29:44.892722  941476 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:29:44.892742  941476 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:29:44.892779  941476 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:29:44.892797  941476 command_runner.go:130] > # pids_limit = -1
	I1213 10:29:44.892825  941476 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1213 10:29:44.892860  941476 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:29:44.892886  941476 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:29:44.892912  941476 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:29:44.892937  941476 command_runner.go:130] > # log_size_max = -1
	I1213 10:29:44.892967  941476 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:29:44.892992  941476 command_runner.go:130] > # log_to_journald = false
	I1213 10:29:44.893016  941476 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:29:44.893040  941476 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:29:44.893073  941476 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:29:44.893097  941476 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:29:44.893118  941476 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:29:44.893142  941476 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:29:44.893174  941476 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:29:44.893198  941476 command_runner.go:130] > # read_only = false
	I1213 10:29:44.893223  941476 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:29:44.893245  941476 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:29:44.893278  941476 command_runner.go:130] > # live configuration reload.
	I1213 10:29:44.893302  941476 command_runner.go:130] > # log_level = "info"
	I1213 10:29:44.893331  941476 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:29:44.893353  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.893380  941476 command_runner.go:130] > # log_filter = ""
	I1213 10:29:44.893406  941476 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893430  941476 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:29:44.893452  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893486  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893520  941476 command_runner.go:130] > # uid_mappings = ""
	I1213 10:29:44.893564  941476 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:29:44.893593  941476 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:29:44.893617  941476 command_runner.go:130] > # separated by comma.
	I1213 10:29:44.893643  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.893997  941476 command_runner.go:130] > # gid_mappings = ""
	I1213 10:29:44.894010  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:29:44.894017  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894024  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894032  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894037  941476 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:29:44.894043  941476 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:29:44.894050  941476 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:29:44.894056  941476 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:29:44.894064  941476 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:29:44.894068  941476 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:29:44.894074  941476 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:29:44.894081  941476 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:29:44.894086  941476 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:29:44.894090  941476 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:29:44.894096  941476 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:29:44.894102  941476 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:29:44.894107  941476 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:29:44.894111  941476 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:29:44.894115  941476 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:29:44.894121  941476 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:29:44.894127  941476 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:29:44.894135  941476 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:29:44.894141  941476 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:29:44.894149  941476 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:29:44.894155  941476 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:29:44.894160  941476 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:29:44.894165  941476 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:29:44.894173  941476 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:29:44.894179  941476 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:29:44.894184  941476 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:29:44.894188  941476 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:29:44.894195  941476 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:29:44.894199  941476 command_runner.go:130] > # pinns_path = ""
	I1213 10:29:44.894204  941476 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:29:44.894210  941476 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:29:44.894216  941476 command_runner.go:130] > # enable_criu_support = true
	I1213 10:29:44.894223  941476 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:29:44.894229  941476 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:29:44.894234  941476 command_runner.go:130] > # enable_pod_events = false
	I1213 10:29:44.894240  941476 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:29:44.894245  941476 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:29:44.894249  941476 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:29:44.894254  941476 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:29:44.894261  941476 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1213 10:29:44.894271  941476 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:29:44.894276  941476 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:29:44.894284  941476 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:29:44.894289  941476 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:29:44.894293  941476 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:29:44.894297  941476 command_runner.go:130] > # ]
	I1213 10:29:44.894303  941476 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:29:44.894309  941476 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:29:44.894316  941476 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:29:44.894321  941476 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:29:44.894324  941476 command_runner.go:130] > #
	I1213 10:29:44.894329  941476 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:29:44.894333  941476 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:29:44.894337  941476 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:29:44.894342  941476 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:29:44.894348  941476 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:29:44.894367  941476 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:29:44.894372  941476 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:29:44.894377  941476 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:29:44.894381  941476 command_runner.go:130] > # monitor_env = []
	I1213 10:29:44.894386  941476 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:29:44.894390  941476 command_runner.go:130] > # allowed_annotations = []
	I1213 10:29:44.894395  941476 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:29:44.894399  941476 command_runner.go:130] > # no_sync_log = false
	I1213 10:29:44.894403  941476 command_runner.go:130] > # default_annotations = {}
	I1213 10:29:44.894407  941476 command_runner.go:130] > # stream_websockets = false
	I1213 10:29:44.894411  941476 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:29:44.894442  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.894448  941476 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:29:44.894454  941476 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:29:44.894461  941476 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:29:44.894468  941476 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:29:44.894471  941476 command_runner.go:130] > #   in $PATH.
	I1213 10:29:44.894478  941476 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:29:44.894482  941476 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:29:44.894488  941476 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:29:44.894492  941476 command_runner.go:130] > #   state.
	I1213 10:29:44.894498  941476 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:29:44.894504  941476 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1213 10:29:44.894510  941476 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:29:44.894516  941476 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:29:44.894521  941476 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:29:44.894527  941476 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:29:44.894533  941476 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:29:44.894539  941476 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:29:44.894545  941476 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:29:44.894550  941476 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:29:44.894557  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:29:44.894564  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:29:44.894574  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:29:44.894580  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:29:44.894588  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:29:44.894596  941476 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:29:44.894602  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:29:44.894608  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:29:44.894614  941476 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:29:44.894621  941476 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:29:44.894628  941476 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:29:44.894634  941476 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:29:44.894640  941476 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:29:44.894646  941476 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:29:44.894652  941476 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:29:44.894661  941476 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:29:44.894667  941476 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:29:44.894672  941476 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:29:44.894679  941476 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:29:44.894684  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:29:44.894691  941476 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:29:44.894695  941476 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:29:44.894702  941476 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:29:44.894707  941476 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:29:44.894714  941476 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 10:29:44.894718  941476 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:29:44.894726  941476 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:29:44.894734  941476 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:29:44.894742  941476 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:29:44.894748  941476 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:29:44.894753  941476 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:29:44.894760  941476 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:29:44.894768  941476 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:29:44.894774  941476 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:29:44.894782  941476 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:29:44.894794  941476 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:29:44.894798  941476 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:29:44.894805  941476 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:29:44.894813  941476 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:29:44.894821  941476 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:29:44.894825  941476 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:29:44.894838  941476 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:29:44.894848  941476 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:29:44.894851  941476 command_runner.go:130] > #
	I1213 10:29:44.894855  941476 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:29:44.894859  941476 command_runner.go:130] > #
	I1213 10:29:44.894866  941476 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:29:44.894872  941476 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:29:44.894878  941476 command_runner.go:130] > #
	I1213 10:29:44.894887  941476 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:29:44.894893  941476 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:29:44.894896  941476 command_runner.go:130] > #
	I1213 10:29:44.894903  941476 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:29:44.894906  941476 command_runner.go:130] > # feature.
	I1213 10:29:44.894909  941476 command_runner.go:130] > #
	I1213 10:29:44.894914  941476 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:29:44.894921  941476 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:29:44.894927  941476 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:29:44.894933  941476 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:29:44.894939  941476 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:29:44.894942  941476 command_runner.go:130] > #
	I1213 10:29:44.894948  941476 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:29:44.894954  941476 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:29:44.894957  941476 command_runner.go:130] > #
	I1213 10:29:44.894963  941476 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:29:44.894968  941476 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:29:44.894971  941476 command_runner.go:130] > #
	I1213 10:29:44.894977  941476 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:29:44.894987  941476 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:29:44.894991  941476 command_runner.go:130] > # limitation.
	I1213 10:29:44.894995  941476 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:29:44.895000  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:29:44.895004  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895008  941476 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:29:44.895013  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895016  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895020  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895025  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895028  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895032  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895036  941476 command_runner.go:130] > allowed_annotations = [
	I1213 10:29:44.895040  941476 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:29:44.895043  941476 command_runner.go:130] > ]
	I1213 10:29:44.895047  941476 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:29:44.895051  941476 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:29:44.895056  941476 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:29:44.895059  941476 command_runner.go:130] > runtime_type = ""
	I1213 10:29:44.895064  941476 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:29:44.895069  941476 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:29:44.895072  941476 command_runner.go:130] > runtime_config_path = ""
	I1213 10:29:44.895076  941476 command_runner.go:130] > container_min_memory = ""
	I1213 10:29:44.895081  941476 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:29:44.895084  941476 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:29:44.895089  941476 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:29:44.895093  941476 command_runner.go:130] > privileged_without_host_devices = false
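
Following the runtime-handler format documented above, an additional VM-type handler is just another table; a hedged sketch (the kata paths are illustrative, not present in this run):

    [crio.runtime.runtimes.kata]
    runtime_path = "/usr/bin/kata-runtime"          # hypothetical install path
    runtime_type = "vm"
    runtime_config_path = "/etc/kata/config.toml"   # only valid with the vm runtime_type, per above
    privileged_without_host_devices = true
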
	I1213 10:29:44.895100  941476 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:29:44.895105  941476 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:29:44.895111  941476 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:29:44.895119  941476 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1213 10:29:44.895129  941476 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:29:44.895139  941476 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:29:44.895151  941476 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:29:44.895156  941476 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:29:44.895166  941476 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:29:44.895174  941476 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:29:44.895181  941476 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:29:44.895188  941476 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:29:44.895191  941476 command_runner.go:130] > # Example:
	I1213 10:29:44.895196  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:29:44.895201  941476 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:29:44.895207  941476 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:29:44.895212  941476 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:29:44.895216  941476 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:29:44.895219  941476 command_runner.go:130] > # cpushares = "5"
	I1213 10:29:44.895223  941476 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:29:44.895227  941476 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:29:44.895230  941476 command_runner.go:130] > # cpulimit = "35"
	I1213 10:29:44.895234  941476 command_runner.go:130] > # Where:
	I1213 10:29:44.895238  941476 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:29:44.895245  941476 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:29:44.895250  941476 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:29:44.895259  941476 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:29:44.895267  941476 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:29:44.895274  941476 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 10:29:44.895279  941476 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:29:44.895286  941476 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:29:44.895290  941476 command_runner.go:130] > # Default value is set to true
	I1213 10:29:44.895294  941476 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:29:44.895300  941476 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:29:44.895305  941476 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:29:44.895309  941476 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:29:44.895313  941476 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:29:44.895318  941476 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1213 10:29:44.895326  941476 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:29:44.895334  941476 command_runner.go:130] > # timezone = ""
	I1213 10:29:44.895341  941476 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:29:44.895343  941476 command_runner.go:130] > #
	I1213 10:29:44.895349  941476 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:29:44.895355  941476 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:29:44.895358  941476 command_runner.go:130] > [crio.image]
	I1213 10:29:44.895364  941476 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:29:44.895368  941476 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:29:44.895373  941476 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:29:44.895380  941476 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895383  941476 command_runner.go:130] > # global_auth_file = ""
	I1213 10:29:44.895388  941476 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:29:44.895393  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895398  941476 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:29:44.895404  941476 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:29:44.895412  941476 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:29:44.895417  941476 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:29:44.895420  941476 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:29:44.895426  941476 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:29:44.895432  941476 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 10:29:44.895438  941476 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 10:29:44.895444  941476 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:29:44.895448  941476 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:29:44.895454  941476 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:29:44.895460  941476 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:29:44.895467  941476 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:29:44.895473  941476 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:29:44.895479  941476 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:29:44.895485  941476 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:29:44.895488  941476 command_runner.go:130] > # pinned_images = [
	I1213 10:29:44.895491  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895497  941476 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:29:44.895503  941476 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:29:44.895512  941476 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:29:44.895519  941476 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:29:44.895524  941476 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:29:44.895529  941476 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:29:44.895534  941476 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:29:44.895540  941476 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:29:44.895547  941476 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:29:44.895554  941476 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 10:29:44.895559  941476 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:29:44.895564  941476 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:29:44.895570  941476 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:29:44.895576  941476 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:29:44.895580  941476 command_runner.go:130] > # changing them here.
	I1213 10:29:44.895586  941476 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:29:44.895590  941476 command_runner.go:130] > # insecure_registries = [
	I1213 10:29:44.895592  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895598  941476 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:29:44.895603  941476 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:29:44.895609  941476 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:29:44.895614  941476 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:29:44.895618  941476 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:29:44.895623  941476 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:29:44.895630  941476 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:29:44.895634  941476 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:29:44.895641  941476 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:29:44.895651  941476 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1213 10:29:44.895657  941476 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:29:44.895662  941476 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:29:44.895666  941476 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:29:44.895672  941476 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:29:44.895679  941476 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 10:29:44.895684  941476 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:29:44.895688  941476 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:29:44.895697  941476 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:29:44.895704  941476 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:29:44.895708  941476 command_runner.go:130] > # oci_artifact_mount_support = true
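
A common override among the image settings above is pinning the pause image so the kubelet's garbage collection never evicts it; a sketch reusing the default image named in this config:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    pinned_images = [
        "registry.k8s.io/pause:3.10.1",
    ]
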
	I1213 10:29:44.895715  941476 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:29:44.895718  941476 command_runner.go:130] > # CNI plugins.
	I1213 10:29:44.895721  941476 command_runner.go:130] > [crio.network]
	I1213 10:29:44.895727  941476 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:29:44.895732  941476 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 10:29:44.895735  941476 command_runner.go:130] > # cni_default_network = ""
	I1213 10:29:44.895741  941476 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:29:44.895745  941476 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:29:44.895751  941476 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:29:44.895754  941476 command_runner.go:130] > # plugin_dirs = [
	I1213 10:29:44.895758  941476 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:29:44.895760  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895764  941476 command_runner.go:130] > # List of included pod metrics.
	I1213 10:29:44.895768  941476 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:29:44.895771  941476 command_runner.go:130] > # ]
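
Pinning CRI-O to one CNI network, rather than the first file found in network_dir, is a one-key change; a sketch with an assumed network name:

    [crio.network]
    cni_default_network = "mynet"   # hypothetical; must match a config under /etc/cni/net.d/
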
	I1213 10:29:44.895778  941476 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:29:44.895781  941476 command_runner.go:130] > [crio.metrics]
	I1213 10:29:44.895786  941476 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:29:44.895790  941476 command_runner.go:130] > # enable_metrics = false
	I1213 10:29:44.895794  941476 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:29:44.895799  941476 command_runner.go:130] > # By default, all metrics are enabled.
	I1213 10:29:44.895805  941476 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:29:44.895813  941476 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:29:44.895818  941476 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:29:44.895822  941476 command_runner.go:130] > # metrics_collectors = [
	I1213 10:29:44.895826  941476 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:29:44.895831  941476 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:29:44.895834  941476 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:29:44.895838  941476 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:29:44.895842  941476 command_runner.go:130] > # 	"operations_total",
	I1213 10:29:44.895849  941476 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:29:44.895854  941476 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:29:44.895859  941476 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:29:44.895863  941476 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:29:44.895867  941476 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:29:44.895871  941476 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:29:44.895875  941476 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:29:44.895879  941476 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:29:44.895883  941476 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:29:44.895888  941476 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:29:44.895892  941476 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:29:44.895896  941476 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:29:44.895899  941476 command_runner.go:130] > # ]
	I1213 10:29:44.895905  941476 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:29:44.895908  941476 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:29:44.895913  941476 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:29:44.895917  941476 command_runner.go:130] > # metrics_port = 9090
	I1213 10:29:44.895922  941476 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:29:44.895925  941476 command_runner.go:130] > # metrics_socket = ""
	I1213 10:29:44.895930  941476 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:29:44.895937  941476 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:29:44.895943  941476 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:29:44.895947  941476 command_runner.go:130] > # certificate on any modification event.
	I1213 10:29:44.895951  941476 command_runner.go:130] > # metrics_cert = ""
	I1213 10:29:44.895955  941476 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:29:44.895960  941476 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:29:44.895963  941476 command_runner.go:130] > # metrics_key = ""
	I1213 10:29:44.895969  941476 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:29:44.895972  941476 command_runner.go:130] > [crio.tracing]
	I1213 10:29:44.895978  941476 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:29:44.895981  941476 command_runner.go:130] > # enable_tracing = false
	I1213 10:29:44.895987  941476 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1213 10:29:44.895991  941476 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:29:44.896000  941476 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:29:44.896007  941476 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:29:44.896011  941476 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:29:44.896014  941476 command_runner.go:130] > [crio.nri]
	I1213 10:29:44.896018  941476 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:29:44.896022  941476 command_runner.go:130] > # enable_nri = true
	I1213 10:29:44.896025  941476 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:29:44.896030  941476 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:29:44.896034  941476 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:29:44.896038  941476 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:29:44.896043  941476 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:29:44.896051  941476 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:29:44.896057  941476 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:29:44.896113  941476 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:29:44.896119  941476 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:29:44.896123  941476 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:29:44.896128  941476 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:29:44.896133  941476 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:29:44.896137  941476 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:29:44.896144  941476 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:29:44.896150  941476 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:29:44.896155  941476 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:29:44.896158  941476 command_runner.go:130] > # - OCI hook injection
	I1213 10:29:44.896163  941476 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:29:44.896167  941476 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:29:44.896172  941476 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:29:44.896176  941476 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:29:44.896186  941476 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:29:44.896193  941476 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:29:44.896198  941476 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:29:44.896201  941476 command_runner.go:130] > #
	I1213 10:29:44.896205  941476 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:29:44.896209  941476 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:29:44.896218  941476 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:29:44.896223  941476 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:29:44.896229  941476 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:29:44.896234  941476 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:29:44.896239  941476 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:29:44.896243  941476 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:29:44.896245  941476 command_runner.go:130] > # ]
	I1213 10:29:44.896251  941476 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 10:29:44.896257  941476 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:29:44.896261  941476 command_runner.go:130] > [crio.stats]
	I1213 10:29:44.896267  941476 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:29:44.896272  941476 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:29:44.896276  941476 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:29:44.896281  941476 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:29:44.896287  941476 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:29:44.896291  941476 command_runner.go:130] > # collection_period = 0
	I1213 10:29:44.896753  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865564739Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:29:44.896774  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865608538Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:29:44.896784  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865641285Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:29:44.896793  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.86566636Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:29:44.896803  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.865746328Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:29:44.896812  941476 command_runner.go:130] ! time="2025-12-13T10:29:44.866102466Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:29:44.896826  941476 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
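The dump above is CRI-O printing its merged configuration: the base /etc/crio/crio.conf plus the drop-in files under /etc/crio/crio.conf.d, applied in lexical order (02-crio.conf, then 10-crio.conf). As a purely illustrative sketch (not the contents of minikube's actual drop-ins), a drop-in that enabled the metrics server left commented out above would only need the keys it overrides:

    # /etc/crio/crio.conf.d/99-metrics.conf (hypothetical drop-in)
    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090

Keys not set in any drop-in keep the defaults shown commented out in the dump.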
	I1213 10:29:44.896949  941476 cni.go:84] Creating CNI manager for ""
	I1213 10:29:44.896967  941476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:29:44.896990  941476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:29:44.897016  941476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:29:44.897147  941476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:29:44.897221  941476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:29:44.904800  941476 command_runner.go:130] > kubeadm
	I1213 10:29:44.904821  941476 command_runner.go:130] > kubectl
	I1213 10:29:44.904825  941476 command_runner.go:130] > kubelet
	I1213 10:29:44.905083  941476 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:29:44.905149  941476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:29:44.912855  941476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:29:44.926542  941476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:29:44.940018  941476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:29:44.953058  941476 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:29:44.956927  941476 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:29:44.957067  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.090811  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:45.111343  941476 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:29:45.111425  941476 certs.go:195] generating shared ca certs ...
	I1213 10:29:45.111459  941476 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.111653  941476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:29:45.111736  941476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:29:45.111762  941476 certs.go:257] generating profile certs ...
	I1213 10:29:45.111936  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:29:45.112043  941476 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:29:45.112141  941476 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:29:45.112183  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:29:45.112222  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:29:45.112262  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:29:45.112293  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:29:45.112328  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:29:45.112371  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:29:45.112404  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:29:45.112444  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:29:45.112521  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:29:45.112600  941476 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:29:45.112629  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:29:45.112687  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:29:45.112733  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:29:45.112831  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:29:45.113060  941476 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:29:45.113147  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem -> /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.113186  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.113227  941476 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.113935  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:29:45.163864  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:29:45.189286  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:29:45.237278  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:29:45.263467  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:29:45.289513  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:29:45.309018  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:29:45.329141  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:29:45.347665  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:29:45.365433  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:29:45.383209  941476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:29:45.402144  941476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:29:45.415520  941476 ssh_runner.go:195] Run: openssl version
	I1213 10:29:45.421431  941476 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:29:45.421939  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.429504  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:29:45.436991  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440561  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440796  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.440864  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:29:45.483791  941476 command_runner.go:130] > 51391683
	I1213 10:29:45.484209  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:29:45.491520  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.498932  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:29:45.509018  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513215  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513301  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.513386  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:29:45.554662  941476 command_runner.go:130] > 3ec20f2e
	I1213 10:29:45.555104  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:29:45.562598  941476 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.570035  941476 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:29:45.578308  941476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582322  941476 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582399  941476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.582459  941476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:29:45.623357  941476 command_runner.go:130] > b5213941
	I1213 10:29:45.623846  941476 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
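Each of the three cert installs above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's trust lookup finds it. A minimal local sketch of the hash-and-symlink step in Go (hypothetical; minikube actually runs the openssl and ln commands over SSH via ssh_runner):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// OpenSSL resolves trusted CAs through /etc/ssl/certs/<subject-hash>.0 symlinks
    	if err := os.Symlink(pem, filepath.Join("/etc/ssl/certs", hash+".0")); err != nil {
    		panic(err)
    	}
    }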
	I1213 10:29:45.631423  941476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635203  941476 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:29:45.635226  941476 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:29:45.635232  941476 command_runner.go:130] > Device: 259,1	Inode: 1052598     Links: 1
	I1213 10:29:45.635239  941476 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:29:45.635245  941476 command_runner.go:130] > Access: 2025-12-13 10:25:37.832562674 +0000
	I1213 10:29:45.635250  941476 command_runner.go:130] > Modify: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635255  941476 command_runner.go:130] > Change: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635260  941476 command_runner.go:130] >  Birth: 2025-12-13 10:21:33.766304384 +0000
	I1213 10:29:45.635337  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:29:45.676331  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.676780  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:29:45.719984  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.720440  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:29:45.763044  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.763152  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:29:45.804752  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.805187  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:29:45.846806  941476 command_runner.go:130] > Certificate will not expire
	I1213 10:29:45.847203  941476 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:29:45.898203  941476 command_runner.go:130] > Certificate will not expire
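Each `openssl x509 -noout -checkend 86400` above succeeds (prints "Certificate will not expire") when the certificate's NotAfter is more than 24 hours away. The equivalent check in Go with crypto/x509, as a sketch (hypothetical helper, not minikube's implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400: report expiry if NotAfter falls within the next 86400 seconds
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }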
	I1213 10:29:45.898680  941476 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:29:45.898809  941476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:29:45.898933  941476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:29:45.924889  941476 cri.go:89] found id: ""
	I1213 10:29:45.924989  941476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:29:45.932161  941476 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:29:45.932226  941476 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:29:45.932248  941476 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:29:45.933123  941476 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:29:45.933177  941476 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:29:45.933244  941476 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:29:45.940638  941476 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:29:45.941072  941476 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-200955" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.941185  941476 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-904040/kubeconfig needs updating (will repair): [kubeconfig missing "functional-200955" cluster setting kubeconfig missing "functional-200955" context setting]
	I1213 10:29:45.941452  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
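kubeconfig.go repairs the file because neither a cluster nor a context named "functional-200955" exists in it yet. A sketch of that presence check using client-go's clientcmd loader (illustrative, with a made-up kubeconfig path; not minikube's exact code):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	_, hasCluster := cfg.Clusters["functional-200955"]
    	_, hasContext := cfg.Contexts["functional-200955"]
    	if !hasCluster || !hasContext {
    		fmt.Println("kubeconfig needs updating (will repair)")
    	}
    }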
	I1213 10:29:45.941955  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.942103  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.942644  941476 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:29:45.942668  941476 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:29:45.942678  941476 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:29:45.942683  941476 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:29:45.942687  941476 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:29:45.942727  941476 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:29:45.943068  941476 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:29:45.951089  941476 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:29:45.951121  941476 kubeadm.go:602] duration metric: took 17.93243ms to restartPrimaryControlPlane
	I1213 10:29:45.951143  941476 kubeadm.go:403] duration metric: took 52.461003ms to StartCluster
	I1213 10:29:45.951159  941476 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951223  941476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.951796  941476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:29:45.951989  941476 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:29:45.952368  941476 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:29:45.952448  941476 addons.go:70] Setting storage-provisioner=true in profile "functional-200955"
	I1213 10:29:45.952463  941476 addons.go:239] Setting addon storage-provisioner=true in "functional-200955"
	I1213 10:29:45.952488  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.952566  941476 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:29:45.952610  941476 addons.go:70] Setting default-storageclass=true in profile "functional-200955"
	I1213 10:29:45.952623  941476 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-200955"
	I1213 10:29:45.952911  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.952951  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.958523  941476 out.go:179] * Verifying Kubernetes components...
	I1213 10:29:45.963377  941476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:29:45.989193  941476 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:29:45.989357  941476 kapi.go:59] client config for functional-200955: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:29:45.989643  941476 addons.go:239] Setting addon default-storageclass=true in "functional-200955"
	I1213 10:29:45.989674  941476 host.go:66] Checking if "functional-200955" exists ...
	I1213 10:29:45.990084  941476 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:29:45.996374  941476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:29:45.999301  941476 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:45.999325  941476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:29:45.999389  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.025120  941476 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.025146  941476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:29:46.025210  941476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:29:46.047237  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.065614  941476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:29:46.182514  941476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:29:46.188367  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:46.228034  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:46.975760  941476 node_ready.go:35] waiting up to 6m0s for node "functional-200955" to be "Ready" ...
	I1213 10:29:46.975884  941476 type.go:168] "Request Body" body=""
	I1213 10:29:46.975940  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:46.976159  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976214  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976242  941476 retry.go:31] will retry after 310.714541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976276  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:46.976296  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976306  941476 retry.go:31] will retry after 212.322267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:46.976367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.188794  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.245508  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.249207  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.249253  941476 retry.go:31] will retry after 232.449188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.287510  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.352377  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.355988  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.356022  941476 retry.go:31] will retry after 216.845813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.476461  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.476540  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.476866  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:47.482125  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:47.540633  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.540674  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.540713  941476 retry.go:31] will retry after 621.150122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.573847  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:47.632148  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:47.632198  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.632239  941476 retry.go:31] will retry after 652.105841ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:47.976625  941476 type.go:168] "Request Body" body=""
	I1213 10:29:47.976714  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:47.977047  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.162374  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.224014  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.224050  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.224096  941476 retry.go:31] will retry after 486.360631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.285241  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:48.341512  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.345196  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.345232  941476 retry.go:31] will retry after 851.054667ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.476501  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.476654  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.477264  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:48.710766  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:48.774597  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:48.774656  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.774677  941476 retry.go:31] will retry after 1.42902923s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:48.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:29:48.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:48.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:48.976568  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:49.197102  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:49.269601  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:49.269709  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.269757  941476 retry.go:31] will retry after 1.296706305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:49.476109  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.476573  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:49.976081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:49.976179  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:49.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.204048  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:50.263787  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.263835  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.263857  941476 retry.go:31] will retry after 2.257067811s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.476081  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.476171  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:50.566907  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:50.629271  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:50.629314  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.629333  941476 retry.go:31] will retry after 1.765407868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:50.976841  941476 type.go:168] "Request Body" body=""
	I1213 10:29:50.976923  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:50.977217  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:50.977269  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:51.475933  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.476012  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.476290  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:51.976028  941476 type.go:168] "Request Body" body=""
	I1213 10:29:51.976124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:51.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.395020  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:52.456823  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.456875  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.456899  941476 retry.go:31] will retry after 1.561909689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.476063  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.476147  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.476449  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:52.521915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:52.578203  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:52.581870  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.581904  941476 retry.go:31] will retry after 3.834800834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:52.976296  941476 type.go:168] "Request Body" body=""
	I1213 10:29:52.976371  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:52.976640  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:53.476019  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.476429  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:53.476481  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:53.976156  941476 type.go:168] "Request Body" body=""
	I1213 10:29:53.976238  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:53.976665  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.019913  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:54.081795  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:54.081851  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.081875  941476 retry.go:31] will retry after 4.858817388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:54.476105  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.476182  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.476432  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:54.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:29:54.976093  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:54.976415  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:55.476129  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.476226  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.476527  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:55.476588  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:55.976456  941476 type.go:168] "Request Body" body=""
	I1213 10:29:55.976520  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:55.976761  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:56.417572  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:29:56.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.476423  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:56.480436  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.480483  941476 retry.go:31] will retry after 4.792687173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:56.976051  941476 type.go:168] "Request Body" body=""
	I1213 10:29:56.976145  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:56.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.475977  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.476051  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:57.976104  941476 type.go:168] "Request Body" body=""
	I1213 10:29:57.976249  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:57.976601  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:57.976655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:29:58.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.476277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:58.940954  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:29:58.976372  941476 type.go:168] "Request Body" body=""
	I1213 10:29:58.976458  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:58.976716  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.010699  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:29:59.010740  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.010759  941476 retry.go:31] will retry after 7.734765537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:29:59.476520  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.476594  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.476930  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:29:59.976794  941476 type.go:168] "Request Body" body=""
	I1213 10:29:59.976872  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:29:59.977198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:29:59.977252  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:00.476972  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.477066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.477383  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:00.976114  941476 type.go:168] "Request Body" body=""
	I1213 10:30:00.976196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:00.976547  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.274155  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:01.347774  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:01.347813  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.347834  941476 retry.go:31] will retry after 9.325183697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:01.478515  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.478628  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.479014  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:01.976839  941476 type.go:168] "Request Body" body=""
	I1213 10:30:01.976947  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:01.977331  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:01.977404  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:02.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.476537  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:02.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:30:02.976275  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:02.976649  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.476192  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.476276  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.476538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:03.976228  941476 type.go:168] "Request Body" body=""
	I1213 10:30:03.976352  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:03.976726  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:04.476318  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.476410  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.476740  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:04.476799  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:04.976561  941476 type.go:168] "Request Body" body=""
	I1213 10:30:04.976631  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:04.976878  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.476699  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.476787  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.477120  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:05.977016  941476 type.go:168] "Request Body" body=""
	I1213 10:30:05.977144  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:05.977510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.475991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.476060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.476330  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:06.746112  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:06.805144  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:06.808651  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.808685  941476 retry.go:31] will retry after 7.088599712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:06.976026  941476 type.go:168] "Request Body" body=""
	I1213 10:30:06.976116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:06.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:06.976507  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:07.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.476279  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.476634  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:07.976084  941476 type.go:168] "Request Body" body=""
	I1213 10:30:07.976170  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:07.976444  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.476153  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.476482  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:08.976213  941476 type.go:168] "Request Body" body=""
	I1213 10:30:08.976308  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:08.976642  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:08.976701  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:09.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.476212  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:09.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:30:09.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:09.976492  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.476265  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.476368  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.476715  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:10.673230  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:10.732312  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:10.736051  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.736087  941476 retry.go:31] will retry after 8.123592788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:10.976475  941476 type.go:168] "Request Body" body=""
	I1213 10:30:10.976550  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:10.976847  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:10.976888  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:11.476725  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.476822  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.477169  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:11.976044  941476 type.go:168] "Request Body" body=""
	I1213 10:30:11.976120  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:11.976458  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.476202  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:12.976059  941476 type.go:168] "Request Body" body=""
	I1213 10:30:12.976141  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:12.976473  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:13.476058  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.476137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.476490  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:13.476548  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:13.898101  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:13.964340  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:13.967836  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.967879  941476 retry.go:31] will retry after 8.492520723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:13.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:30:13.976068  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:13.976327  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.476139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.476442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:14.976067  941476 type.go:168] "Request Body" body=""
	I1213 10:30:14.976142  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:14.976454  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.476080  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:15.975941  941476 type.go:168] "Request Body" body=""
	I1213 10:30:15.976026  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:15.976392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:15.976452  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:16.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.476159  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:16.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:30:16.976102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:16.976412  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.476174  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:17.976100  941476 type.go:168] "Request Body" body=""
	I1213 10:30:17.976180  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:17.976600  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:17.976654  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:30:18.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:30:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:18.476393  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:18.859953  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:18.916800  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:18.920763  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.920813  941476 retry.go:31] will retry after 11.17407044s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:18.976006  941476 type.go:168] "Request Body" body=""
	I1213 10:30:18.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:18.976434  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET was repeated every ~500ms through 10:30:21.976, each attempt returning an empty response; the failure surfaced periodically as:]
	W1213 10:30:20.476556  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
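While the applies retry, node_ready.go polls GET /api/v1/nodes/functional-200955 every ~500ms waiting for the node's Ready condition. A client-go sketch of an equivalent loop follows; the kubeconfig path and node name are taken from this log, but the loop itself is illustrative, not minikube's code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-200955", metav1.GetOptions{})
            if err != nil {
                // e.g. "connection refused" while the apiserver is down
                fmt.Println("will retry:", err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }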
	I1213 10:30:22.460571  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:22.476131  941476 type.go:168] "Request Body" body=""
	I1213 10:30:22.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:22.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:30:22.521379  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:22.525059  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:22.525092  941476 retry.go:31] will retry after 25.139993985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
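The recurring "failed to download openapi" error is kubectl's client-side validation at work: before applying, it fetches the OpenAPI schema from the apiserver, so with nothing listening on localhost:8441 even a well-formed manifest fails at this preflight step (hence the hint to pass --validate=false). The underlying failure can be reproduced with a plain TCP dial, assuming the port is closed:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl's validation first fetches /openapi/v2 from the apiserver;
        // when nothing listens on the port, the TCP dial itself fails.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
    }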
	I1213 10:30:22.976652  941476 type.go:168] "Request Body" body=""
	I1213 10:30:22.976730  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:22.976986  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:22.977026  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical ~500ms polling continued through 10:30:29.976, with the same connection-refused warning repeated at 10:30:25.476 and 10:30:27.976 ...]
	I1213 10:30:30.096045  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:30.160844  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:30.160891  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:30.160917  941476 retry.go:31] will retry after 23.835716192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:30.476291  941476 type.go:168] "Request Body" body=""
	I1213 10:30:30.476381  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:30.476623  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:30.476662  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical ~500ms polling continued through 10:30:47.476, with the same connection-refused warning repeated at 10:30:32.477, 10:30:34.976, 10:30:36.977, 10:30:39.476, 10:30:41.476, 10:30:43.476, and 10:30:45.976 ...]
	I1213 10:30:47.665860  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:30:47.731394  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:47.731441  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:47.731460  941476 retry.go:31] will retry after 19.194003802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:47.975899  941476 type.go:168] "Request Body" body=""
	I1213 10:30:47.975974  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:47.976303  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:48.476469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical ~500ms polling continued through 10:30:53.976, with the same connection-refused warning repeated at 10:30:50.976 and 10:30:52.977 ...]
	I1213 10:30:53.997712  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:30:54.059604  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:30:54.063660  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:54.063694  941476 retry.go:31] will retry after 30.126310408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:30:54.475958  941476 type.go:168] "Request Body" body=""
	I1213 10:30:54.476070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:30:54.476392  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:30:55.476642  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical ~500ms polling continued through 10:31:06.476, with the same connection-refused warning repeated at 10:30:57.976, 10:30:59.976, 10:31:02.476, and 10:31:04.976 ...]
	I1213 10:31:06.925824  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:31:06.976406  941476 type.go:168] "Request Body" body=""
	I1213 10:31:06.976485  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:06.976757  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:06.976800  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:06.991385  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991438  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:06.991540  941476 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
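At this point the storage-provisioner addon has exhausted its retry budget, so minikube surfaces the failure as a warning ("Enabling 'storage-provisioner' returned an error") and keeps going rather than aborting the start. A rough Go sketch of that run-callbacks-and-warn pattern; the callback names here are illustrative, not minikube's actual addons API:

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        // illustrative addon enable callbacks, not taken from minikube's source
        addons := map[string]func() error{
            "storage-provisioner": func() error { return errors.New("apply failed after retries") },
            "storageclass":        func() error { return nil },
        }
        for name, enable := range addons {
            if err := enable(); err != nil {
                // warn and continue rather than failing the whole start
                fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
            }
        }
    }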
	I1213 10:31:07.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:31:07.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:07.476475  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:09.476820  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical ~500ms polling continued through 10:31:10.976 with the same result ...]
	I1213 10:31:11.475973  941476 type.go:168] "Request Body" body=""
	I1213 10:31:11.476050  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.476395  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:11.976476  941476 type.go:168] "Request Body" body=""
	I1213 10:31:11.976551  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:11.976955  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:11.977016  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:12.476754  941476 type.go:168] "Request Body" body=""
	I1213 10:31:12.476839  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.477117  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:12.976506  941476 type.go:168] "Request Body" body=""
	I1213 10:31:12.976583  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:12.976915  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.476748  941476 type.go:168] "Request Body" body=""
	I1213 10:31:13.476846  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.477198  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:13.975894  941476 type.go:168] "Request Body" body=""
	I1213 10:31:13.975961  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:13.976227  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:14.475943  941476 type.go:168] "Request Body" body=""
	I1213 10:31:14.476062  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.476400  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:14.476469  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:14.976011  941476 type.go:168] "Request Body" body=""
	I1213 10:31:14.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:14.976509  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.476219  941476 type.go:168] "Request Body" body=""
	I1213 10:31:15.476292  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.476567  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:15.976650  941476 type.go:168] "Request Body" body=""
	I1213 10:31:15.976734  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:15.977073  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:16.476854  941476 type.go:168] "Request Body" body=""
	I1213 10:31:16.476948  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.477273  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:16.477330  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:16.976000  941476 type.go:168] "Request Body" body=""
	I1213 10:31:16.976073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:16.976427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.476545  941476 type.go:168] "Request Body" body=""
	I1213 10:31:17.476677  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.477181  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:17.976852  941476 type.go:168] "Request Body" body=""
	I1213 10:31:17.976935  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:17.977261  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.475978  941476 type.go:168] "Request Body" body=""
	I1213 10:31:18.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.476322  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:18.976069  941476 type.go:168] "Request Body" body=""
	I1213 10:31:18.976149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:18.976500  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:18.976571  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:19.476245  941476 type.go:168] "Request Body" body=""
	I1213 10:31:19.476328  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.476669  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:19.976355  941476 type.go:168] "Request Body" body=""
	I1213 10:31:19.976423  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:19.976681  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.476070  941476 type.go:168] "Request Body" body=""
	I1213 10:31:20.476146  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.476464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:20.976239  941476 type.go:168] "Request Body" body=""
	I1213 10:31:20.976313  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:20.976664  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:20.976722  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:21.476108  941476 type.go:168] "Request Body" body=""
	I1213 10:31:21.476196  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.476546  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:21.976459  941476 type.go:168] "Request Body" body=""
	I1213 10:31:21.976535  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:21.976854  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.476728  941476 type.go:168] "Request Body" body=""
	I1213 10:31:22.476820  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.477138  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:22.976866  941476 type.go:168] "Request Body" body=""
	I1213 10:31:22.976937  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:22.977188  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:31:22.977229  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:31:23.475910  941476 type.go:168] "Request Body" body=""
	I1213 10:31:23.475992  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.476337  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:23.976033  941476 type.go:168] "Request Body" body=""
	I1213 10:31:23.976146  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:31:23.976483  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:31:24.190915  941476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:31:24.248888  941476 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.248934  941476 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:31:24.249045  941476 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:31:24.254122  941476 out.go:179] * Enabled addons: 
	I1213 10:31:24.256914  941476 addons.go:530] duration metric: took 1m38.304545325s for enable addons: enabled=[]
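	The interleaved GETs above and below are a node-readiness poll: every ~500ms the cluster is asked for the node object, its Ready condition is checked, and failures are retried with a periodic warning. A minimal sketch of that kind of poll, assuming a client-go clientset built as in the earlier snippet (this mirrors the logged cadence but is not minikube's node_ready.go):

```go
// Readiness-poll sketch: GET the node every 500ms until its Ready
// condition is True or the timeout elapses, retrying through errors the
// way the "will retry" warnings in this log do.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Swallow the error so the poll keeps retrying while the
				// apiserver is down, as the log above does.
				fmt.Println("error getting node (will retry):", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```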
	[log trimmed: the same ~500ms node poll continues from 10:31:24.476 through 10:32:04.976, still receiving empty responses; connection-refused "will retry" warnings recur at 10:31:25.476, 10:31:27.976, 10:31:29.976, 10:31:31.976, 10:31:34.476, 10:31:36.476, 10:31:38.976, 10:31:41.476, 10:31:43.976, 10:31:45.976, 10:31:47.976, 10:31:49.976, 10:31:51.977, 10:31:54.476, 10:31:56.976, 10:31:59.476, 10:32:01.476 and 10:32:03.976]
	I1213 10:32:05.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:32:05.476131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.476494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:05.976279  941476 type.go:168] "Request Body" body=""
	I1213 10:32:05.976358  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:05.976703  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:05.976759  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:06.476417  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.476497  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.476760  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:06.976645  941476 type.go:168] "Request Body" body=""
	I1213 10:32:06.976724  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:06.977077  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.476899  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.476981  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.477364  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:07.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:07.976148  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:07.976442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:08.476070  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.476152  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.476469  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:08.476525  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:08.976049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:08.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:08.976453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.476056  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.476367  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:09.976056  941476 type.go:168] "Request Body" body=""
	I1213 10:32:09.976139  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:09.976488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:10.476201  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.476278  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.476604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:10.476662  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:10.975985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:10.976066  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:10.976386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.476435  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:11.976014  941476 type.go:168] "Request Body" body=""
	I1213 10:32:11.976091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:11.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.475989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.476328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:12.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:12.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:12.976433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:12.976487  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:13.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.476108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:13.976139  941476 type.go:168] "Request Body" body=""
	I1213 10:32:13.976217  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:13.976477  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.476065  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.476149  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.476488  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:14.976200  941476 type.go:168] "Request Body" body=""
	I1213 10:32:14.976280  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:14.976630  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:14.976691  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:15.476331  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.476407  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.476718  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:15.976843  941476 type.go:168] "Request Body" body=""
	I1213 10:32:15.976916  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:15.977265  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.476944  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.477018  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.477394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:16.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:16.976173  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:16.976437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:17.476036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.476113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.476455  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:17.476515  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:17.976191  941476 type.go:168] "Request Body" body=""
	I1213 10:32:17.976268  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:17.976582  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.475997  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.476079  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.476340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:18.976113  941476 type.go:168] "Request Body" body=""
	I1213 10:32:18.976206  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:18.976563  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.476049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.476129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.476456  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:19.976098  941476 type.go:168] "Request Body" body=""
	I1213 10:32:19.976166  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:19.976467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:19.976522  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:20.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.476441  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:20.976163  941476 type.go:168] "Request Body" body=""
	I1213 10:32:20.976242  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:20.976531  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.475975  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.476045  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.476354  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:21.976036  941476 type.go:168] "Request Body" body=""
	I1213 10:32:21.976111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:21.976471  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:22.476157  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.476236  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.476595  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:22.476649  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:22.975989  941476 type.go:168] "Request Body" body=""
	I1213 10:32:22.976063  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:22.976350  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.476043  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.476117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:23.976206  941476 type.go:168] "Request Body" body=""
	I1213 10:32:23.976283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:23.976637  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.476065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.476346  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:24.976054  941476 type.go:168] "Request Body" body=""
	I1213 10:32:24.976136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:24.976464  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:24.976520  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:25.476178  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.476258  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.476612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:25.976593  941476 type.go:168] "Request Body" body=""
	I1213 10:32:25.976662  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:25.976936  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.476747  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.476821  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.477090  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:26.975948  941476 type.go:168] "Request Body" body=""
	I1213 10:32:26.976024  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:26.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:27.476084  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.476158  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:27.476474  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:27.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:27.976087  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:27.976410  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.476155  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.476244  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.476588  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:28.976255  941476 type.go:168] "Request Body" body=""
	I1213 10:32:28.976331  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:28.976594  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:29.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.476107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.476476  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:29.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:29.976055  941476 type.go:168] "Request Body" body=""
	I1213 10:32:29.976132  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:29.976460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.475985  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.476378  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:30.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:30.976108  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:30.976436  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.476119  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.476446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:31.976398  941476 type.go:168] "Request Body" body=""
	I1213 10:32:31.976466  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:31.976719  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:31.976758  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:32.476588  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.476670  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.477064  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:32.976842  941476 type.go:168] "Request Body" body=""
	I1213 10:32:32.976917  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:32.977255  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.475960  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.476032  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.476294  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:33.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:32:33.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:33.976448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:34.476153  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.476241  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.476568  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:34.476624  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:34.976261  941476 type.go:168] "Request Body" body=""
	I1213 10:32:34.976336  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:34.976618  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.476037  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.476116  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.476453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:35.976396  941476 type.go:168] "Request Body" body=""
	I1213 10:32:35.976472  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:35.976804  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:36.476554  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.476624  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.476895  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:36.476937  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:36.976884  941476 type.go:168] "Request Body" body=""
	I1213 10:32:36.976958  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:36.977293  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.476031  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.476114  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.476465  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:37.976004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:37.976074  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:37.976340  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.476062  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.476138  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.476437  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:38.976005  941476 type.go:168] "Request Body" body=""
	I1213 10:32:38.976078  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:38.976403  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:38.976454  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:39.475982  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.476428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:39.976002  941476 type.go:168] "Request Body" body=""
	I1213 10:32:39.976082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:39.976414  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.476038  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.476462  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:40.976166  941476 type.go:168] "Request Body" body=""
	I1213 10:32:40.976245  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:40.976502  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:40.976544  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:41.476000  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.476073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.476433  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:41.976208  941476 type.go:168] "Request Body" body=""
	I1213 10:32:41.976289  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:41.976643  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.475983  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.476059  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.476353  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:42.976069  941476 type.go:168] "Request Body" body=""
	I1213 10:32:42.976137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:42.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:43.476306  941476 type.go:168] "Request Body" body=""
	I1213 10:32:43.476396  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.476750  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:43.476809  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:43.976720  941476 type.go:168] "Request Body" body=""
	I1213 10:32:43.976798  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:43.977089  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.477009  941476 type.go:168] "Request Body" body=""
	I1213 10:32:44.477085  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.477386  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:44.976767  941476 type.go:168] "Request Body" body=""
	I1213 10:32:44.976848  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:44.977176  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.475924  941476 type.go:168] "Request Body" body=""
	I1213 10:32:45.476036  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.476370  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:45.975913  941476 type.go:168] "Request Body" body=""
	I1213 10:32:45.975984  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:45.976317  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:45.976387  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:46.476025  941476 type.go:168] "Request Body" body=""
	I1213 10:32:46.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:46.975972  941476 type.go:168] "Request Body" body=""
	I1213 10:32:46.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:46.976351  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.476004  941476 type.go:168] "Request Body" body=""
	I1213 10:32:47.476136  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:47.976017  941476 type.go:168] "Request Body" body=""
	I1213 10:32:47.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:47.976421  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:47.976477  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:48.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:32:48.476203  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.476459  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:48.976015  941476 type.go:168] "Request Body" body=""
	I1213 10:32:48.976089  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:48.976419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.476032  941476 type.go:168] "Request Body" body=""
	I1213 10:32:49.476106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.476423  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:49.975990  941476 type.go:168] "Request Body" body=""
	I1213 10:32:49.976065  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:49.976312  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:50.476026  941476 type.go:168] "Request Body" body=""
	I1213 10:32:50.476104  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.476430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:50.476486  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:50.976049  941476 type.go:168] "Request Body" body=""
	I1213 10:32:50.976131  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:50.976481  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.476188  941476 type.go:168] "Request Body" body=""
	I1213 10:32:51.476259  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.476529  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:51.976428  941476 type.go:168] "Request Body" body=""
	I1213 10:32:51.976507  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:51.976844  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:52.476643  941476 type.go:168] "Request Body" body=""
	I1213 10:32:52.476721  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.477067  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:52.477124  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:32:52.976867  941476 type.go:168] "Request Body" body=""
	I1213 10:32:52.976936  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:52.977207  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.475946  941476 type.go:168] "Request Body" body=""
	I1213 10:32:53.476027  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.476328  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:53.975930  941476 type.go:168] "Request Body" body=""
	I1213 10:32:53.976034  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:53.976391  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.475960  941476 type.go:168] "Request Body" body=""
	I1213 10:32:54.476035  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.476297  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:32:54.975999  941476 type.go:168] "Request Body" body=""
	I1213 10:32:54.976070  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:32:54.976357  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:32:54.976407  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-200955 poll repeats every ~500ms from 10:32:55 through 10:33:54, each attempt logged with the same empty response (status="" headers="" milliseconds=0); the node_ready.go:55 "will retry" warning recurs roughly every two seconds, always failing with: dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1213 10:33:55.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:33:55.476124  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.476467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:55.476548  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:55.976476  941476 type.go:168] "Request Body" body=""
	I1213 10:33:55.976544  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:55.976834  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.476663  941476 type.go:168] "Request Body" body=""
	I1213 10:33:56.476741  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.477071  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:56.975949  941476 type.go:168] "Request Body" body=""
	I1213 10:33:56.976040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:56.976420  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:57.475988  941476 type.go:168] "Request Body" body=""
	I1213 10:33:57.476057  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.476315  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:57.976051  941476 type.go:168] "Request Body" body=""
	I1213 10:33:57.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:57.976419  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:57.976467  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:33:58.476120  941476 type.go:168] "Request Body" body=""
	I1213 10:33:58.476204  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.476550  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:58.976099  941476 type.go:168] "Request Body" body=""
	I1213 10:33:58.976165  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:58.976418  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:59.476030  941476 type.go:168] "Request Body" body=""
	I1213 10:33:59.476110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.476450  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:33:59.976134  941476 type.go:168] "Request Body" body=""
	I1213 10:33:59.976218  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:33:59.976654  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:33:59.976717  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:00.476365  941476 type.go:168] "Request Body" body=""
	I1213 10:34:00.476441  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.476723  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:00.976554  941476 type.go:168] "Request Body" body=""
	I1213 10:34:00.976626  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:00.976899  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:01.476692  941476 type.go:168] "Request Body" body=""
	I1213 10:34:01.476765  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.477095  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:01.976838  941476 type.go:168] "Request Body" body=""
	I1213 10:34:01.976916  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:01.977190  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:01.977235  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:02.475885  941476 type.go:168] "Request Body" body=""
	I1213 10:34:02.475972  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.476308  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:02.976032  941476 type.go:168] "Request Body" body=""
	I1213 10:34:02.976106  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:02.976439  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:03.476117  941476 type.go:168] "Request Body" body=""
	I1213 10:34:03.476185  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.476511  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:03.976078  941476 type.go:168] "Request Body" body=""
	I1213 10:34:03.976164  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:03.976510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:04.476128  941476 type.go:168] "Request Body" body=""
	I1213 10:34:04.476208  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.476533  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:04.476591  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:04.975971  941476 type.go:168] "Request Body" body=""
	I1213 10:34:04.976047  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:04.976363  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.476003  941476 type.go:168] "Request Body" body=""
	I1213 10:34:05.476075  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.476405  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:05.976170  941476 type.go:168] "Request Body" body=""
	I1213 10:34:05.976243  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:05.976545  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.476105  941476 type.go:168] "Request Body" body=""
	I1213 10:34:06.476180  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.476517  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:06.976527  941476 type.go:168] "Request Body" body=""
	I1213 10:34:06.976615  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:06.976986  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:06.977056  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:07.476804  941476 type.go:168] "Request Body" body=""
	I1213 10:34:07.476894  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.477246  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:07.975926  941476 type.go:168] "Request Body" body=""
	I1213 10:34:07.975997  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:07.976254  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.476006  941476 type.go:168] "Request Body" body=""
	I1213 10:34:08.476102  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.476453  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:08.976173  941476 type.go:168] "Request Body" body=""
	I1213 10:34:08.976254  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:08.976538  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:09.476199  941476 type.go:168] "Request Body" body=""
	I1213 10:34:09.476277  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.476604  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:09.476655  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:09.976021  941476 type.go:168] "Request Body" body=""
	I1213 10:34:09.976097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:09.976401  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.476158  941476 type.go:168] "Request Body" body=""
	I1213 10:34:10.476243  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.476583  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:10.975974  941476 type.go:168] "Request Body" body=""
	I1213 10:34:10.976050  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:10.976361  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.476040  941476 type.go:168] "Request Body" body=""
	I1213 10:34:11.476133  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.476485  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:11.976462  941476 type.go:168] "Request Body" body=""
	I1213 10:34:11.976535  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:11.976826  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:11.976874  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:12.476533  941476 type.go:168] "Request Body" body=""
	I1213 10:34:12.476615  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.476871  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:12.976701  941476 type.go:168] "Request Body" body=""
	I1213 10:34:12.976782  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:12.977103  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.476950  941476 type.go:168] "Request Body" body=""
	I1213 10:34:13.477040  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.477394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:13.976070  941476 type.go:168] "Request Body" body=""
	I1213 10:34:13.976154  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:13.976430  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:14.476035  941476 type.go:168] "Request Body" body=""
	I1213 10:34:14.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.476448  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:14.476506  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:14.976192  941476 type.go:168] "Request Body" body=""
	I1213 10:34:14.976290  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:14.976612  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.475988  941476 type.go:168] "Request Body" body=""
	I1213 10:34:15.476090  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.476371  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:15.975973  941476 type.go:168] "Request Body" body=""
	I1213 10:34:15.976053  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:15.976336  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.476054  941476 type.go:168] "Request Body" body=""
	I1213 10:34:16.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.476457  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:16.976232  941476 type.go:168] "Request Body" body=""
	I1213 10:34:16.976307  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:16.976573  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:16.976615  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:17.476042  941476 type.go:168] "Request Body" body=""
	I1213 10:34:17.476118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.476467  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:17.976182  941476 type.go:168] "Request Body" body=""
	I1213 10:34:17.976258  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:17.976609  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.476297  941476 type.go:168] "Request Body" body=""
	I1213 10:34:18.476413  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.476678  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:18.976046  941476 type.go:168] "Request Body" body=""
	I1213 10:34:18.976123  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:18.976446  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:19.476033  941476 type.go:168] "Request Body" body=""
	I1213 10:34:19.476117  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.476440  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:19.476499  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:19.976045  941476 type.go:168] "Request Body" body=""
	I1213 10:34:19.976129  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:19.976535  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:34:20.476097  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.476428  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:20.976025  941476 type.go:168] "Request Body" body=""
	I1213 10:34:20.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:20.976470  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:21.476149  941476 type.go:168] "Request Body" body=""
	I1213 10:34:21.476232  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.476535  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:21.476579  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:21.976488  941476 type.go:168] "Request Body" body=""
	I1213 10:34:21.976565  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:21.976917  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.476734  941476 type.go:168] "Request Body" body=""
	I1213 10:34:22.476814  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.477160  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:22.976929  941476 type.go:168] "Request Body" body=""
	I1213 10:34:22.977004  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:22.977264  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.475962  941476 type.go:168] "Request Body" body=""
	I1213 10:34:23.476043  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.476394  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:23.975992  941476 type.go:168] "Request Body" body=""
	I1213 10:34:23.976073  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:23.976409  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:23.976471  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:24.475994  941476 type.go:168] "Request Body" body=""
	I1213 10:34:24.476067  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.476343  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:24.976030  941476 type.go:168] "Request Body" body=""
	I1213 10:34:24.976113  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:24.976425  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.476120  941476 type.go:168] "Request Body" body=""
	I1213 10:34:25.476209  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.476597  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:25.976332  941476 type.go:168] "Request Body" body=""
	I1213 10:34:25.976407  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:25.976654  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:25.976698  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:26.476026  941476 type.go:168] "Request Body" body=""
	I1213 10:34:26.476112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.476445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:26.976015  941476 type.go:168] "Request Body" body=""
	I1213 10:34:26.976105  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:26.976489  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.476210  941476 type.go:168] "Request Body" body=""
	I1213 10:34:27.476283  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.476557  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:27.976225  941476 type.go:168] "Request Body" body=""
	I1213 10:34:27.976306  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:27.976615  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:28.476015  941476 type.go:168] "Request Body" body=""
	I1213 10:34:28.476091  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.476427  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:28.476486  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:28.976015  941476 type.go:168] "Request Body" body=""
	I1213 10:34:28.976082  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:28.976344  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.476028  941476 type.go:168] "Request Body" body=""
	I1213 10:34:29.476111  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.476510  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:29.976204  941476 type.go:168] "Request Body" body=""
	I1213 10:34:29.976284  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:29.976620  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:30.476184  941476 type.go:168] "Request Body" body=""
	I1213 10:34:30.476253  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.476523  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:30.476567  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:30.976028  941476 type.go:168] "Request Body" body=""
	I1213 10:34:30.976107  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:30.976466  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.476057  941476 type.go:168] "Request Body" body=""
	I1213 10:34:31.476134  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.476442  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:31.976251  941476 type.go:168] "Request Body" body=""
	I1213 10:34:31.976330  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:31.976592  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:32.476291  941476 type.go:168] "Request Body" body=""
	I1213 10:34:32.476378  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.476726  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:32.476797  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:32.976040  941476 type.go:168] "Request Body" body=""
	I1213 10:34:32.976118  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:32.976497  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.476816  941476 type.go:168] "Request Body" body=""
	I1213 10:34:33.476896  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.477256  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:33.975966  941476 type.go:168] "Request Body" body=""
	I1213 10:34:33.976050  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:33.976402  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.476115  941476 type.go:168] "Request Body" body=""
	I1213 10:34:34.476192  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.476542  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:34.976227  941476 type.go:168] "Request Body" body=""
	I1213 10:34:34.976305  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:34.976571  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:34.976613  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:35.476273  941476 type.go:168] "Request Body" body=""
	I1213 10:34:35.476350  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.476744  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:35.976574  941476 type.go:168] "Request Body" body=""
	I1213 10:34:35.976660  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:35.976987  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.476793  941476 type.go:168] "Request Body" body=""
	I1213 10:34:36.476879  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.477161  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:36.976010  941476 type.go:168] "Request Body" body=""
	I1213 10:34:36.976112  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:36.976494  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:37.476223  941476 type.go:168] "Request Body" body=""
	I1213 10:34:37.476305  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.476698  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:37.476756  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:37.976397  941476 type.go:168] "Request Body" body=""
	I1213 10:34:37.976468  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:37.976743  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:38.476018  941476 type.go:168] "Request Body" body=""
	I1213 10:34:38.476101  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.476460  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:38.976171  941476 type.go:168] "Request Body" body=""
	I1213 10:34:38.976253  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:38.976575  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.475986  941476 type.go:168] "Request Body" body=""
	I1213 10:34:39.476060  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.476387  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:39.976027  941476 type.go:168] "Request Body" body=""
	I1213 10:34:39.976100  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:39.976461  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:39.976525  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:40.476055  941476 type.go:168] "Request Body" body=""
	I1213 10:34:40.476137  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.476513  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:40.975990  941476 type.go:168] "Request Body" body=""
	I1213 10:34:40.976061  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:40.976330  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.476024  941476 type.go:168] "Request Body" body=""
	I1213 10:34:41.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.476438  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:41.976017  941476 type.go:168] "Request Body" body=""
	I1213 10:34:41.976103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:41.976445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:42.476145  941476 type.go:168] "Request Body" body=""
	I1213 10:34:42.476214  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.476486  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:42.476531  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:42.976037  941476 type.go:168] "Request Body" body=""
	I1213 10:34:42.976110  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:42.976445  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:43.476157  941476 type.go:168] "Request Body" body=""
	I1213 10:34:43.476237  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.476565  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:43.975999  941476 type.go:168] "Request Body" body=""
	I1213 10:34:43.976386  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:43.976856  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:44.476021  941476 type.go:168] "Request Body" body=""
	I1213 10:34:44.476103  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.476481  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:44.476557  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:34:44.976279  941476 type.go:168] "Request Body" body=""
	I1213 10:34:44.976368  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:44.976729  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:34:45.476416  941476 type.go:168] "Request Body" body=""
	I1213 10:34:45.476491  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:34:45.476765  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:34:46.976368  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET /api/v1/nodes/functional-200955 poll repeats every ~500ms from 10:34:47 through 10:35:45, every attempt failing with "connect: connection refused"; the node_ready.go:55 retry warning recurs roughly every 2s throughout ...]
	W1213 10:35:45.476717  941476 node_ready.go:55] error getting node "functional-200955" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-200955": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:35:45.976693  941476 type.go:168] "Request Body" body=""
	I1213 10:35:45.976776  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:45.977113  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.476938  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.477014  941476 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-200955" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:35:46.477384  941476 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:35:46.975991  941476 type.go:168] "Request Body" body=""
	I1213 10:35:46.976091  941476 node_ready.go:38] duration metric: took 6m0.000294728s for node "functional-200955" to be "Ready" ...
	I1213 10:35:46.979089  941476 out.go:203] 
	W1213 10:35:46.981875  941476 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:35:46.981899  941476 out.go:285] * 
	W1213 10:35:46.984058  941476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:35:46.987297  941476 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:35:55 functional-200955 crio[5381]: time="2025-12-13T10:35:55.812081647Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f2f4c3b0-bfe0-432d-92c1-d6428259751f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.879814239Z" level=info msg="Checking image status: minikube-local-cache-test:functional-200955" id=d082875c-eac0-435e-a97d-6a8a8a566f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.880023267Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.880082279Z" level=info msg="Image minikube-local-cache-test:functional-200955 not found" id=d082875c-eac0-435e-a97d-6a8a8a566f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.880184319Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-200955 found" id=d082875c-eac0-435e-a97d-6a8a8a566f94 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.906408418Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-200955" id=9d156be8-dad9-4da7-af6e-b617028f9b26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.906580014Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-200955 not found" id=9d156be8-dad9-4da7-af6e-b617028f9b26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.906641676Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-200955 found" id=9d156be8-dad9-4da7-af6e-b617028f9b26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.930593009Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-200955" id=6282e5a5-2317-4a94-86bd-2371e00a7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.930756965Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-200955 not found" id=6282e5a5-2317-4a94-86bd-2371e00a7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:56 functional-200955 crio[5381]: time="2025-12-13T10:35:56.930815772Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-200955 found" id=6282e5a5-2317-4a94-86bd-2371e00a7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:57 functional-200955 crio[5381]: time="2025-12-13T10:35:57.929707147Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3ad8d8ae-4a91-4542-8f03-6bcf33113c85 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.268997012Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=0a5da476-2f30-4d30-9b78-e331345e7aa0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.269167795Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=0a5da476-2f30-4d30-9b78-e331345e7aa0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.269210208Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=0a5da476-2f30-4d30-9b78-e331345e7aa0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.88396171Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4f1e871a-ac94-4a6d-aac9-b3d3bd7ad6d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.884104898Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=4f1e871a-ac94-4a6d-aac9-b3d3bd7ad6d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.884142765Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=4f1e871a-ac94-4a6d-aac9-b3d3bd7ad6d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.913583895Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c3d18fbc-cdd5-4bc8-8f2c-c45bb317a75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.913736159Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c3d18fbc-cdd5-4bc8-8f2c-c45bb317a75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.913773763Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c3d18fbc-cdd5-4bc8-8f2c-c45bb317a75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.93940333Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=7990bddd-bcf9-42ae-a236-f1f3f0aff19c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.939569756Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=7990bddd-bcf9-42ae-a236-f1f3f0aff19c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:58 functional-200955 crio[5381]: time="2025-12-13T10:35:58.939610627Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=7990bddd-bcf9-42ae-a236-f1f3f0aff19c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:59 functional-200955 crio[5381]: time="2025-12-13T10:35:59.52627618Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=03c3c44e-c2d0-4965-9033-b96398801e9f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:36:03.879338    9511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:03.880163    9511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:03.881910    9511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:03.882512    9511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:36:03.884143    9511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:36:03 up  5:18,  0 user,  load average: 0.53, 0.39, 0.89
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:36:01 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:36:01 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 13 10:36:01 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:01 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:02 functional-200955 kubelet[9386]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:02 functional-200955 kubelet[9386]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:02 functional-200955 kubelet[9386]: E1213 10:36:02.060923    9386 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:36:02 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:36:02 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:36:02 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 13 10:36:02 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:02 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:02 functional-200955 kubelet[9407]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:02 functional-200955 kubelet[9407]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:02 functional-200955 kubelet[9407]: E1213 10:36:02.788453    9407 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:36:02 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:36:02 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:36:03 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 13 10:36:03 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:03 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:36:03 functional-200955 kubelet[9429]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:03 functional-200955 kubelet[9429]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:36:03 functional-200955 kubelet[9429]: E1213 10:36:03.531768    9429 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:36:03 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:36:03 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (345.88994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.47s)
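
The kubelet journal above shows the root cause behind this profile's failures: kubelet v1.35.0-beta.0 exits on cgroup v1 hosts unless the KubeletConfiguration option FailCgroupV1 is set to false (the ExtraConfig logs below spell this out in a kubeadm preflight warning). A minimal sketch of relaxing that check in place, assuming the /var/lib/kubelet/config.yaml path that kubeadm reports writing, and assuming minikube does not regenerate that file before the next kubelet start:

    # Sketch only: append failCgroupV1: false to the kubelet config on the node,
    # then restart kubelet; profile name taken from this report.
    out/minikube-linux-arm64 ssh -p functional-200955 -- \
      "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml \
       && sudo systemctl restart kubelet"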

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-200955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 10:38:37.841443  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:25.732883  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:41:48.798475  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:43:37.839896  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:45:25.733105  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-200955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.181158501s)

                                                
                                                
-- stdout --
	* [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00023703s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-200955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.182336827s for "functional-200955" cluster.
I1213 10:48:17.146218  907484 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
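The stderr above ends with minikube's own remediation hint. Spelled out as a command (a sketch of the log's suggestion, not verified by this run; since these logs show a FailCgroupV1 validation failure, the cgroup-driver change alone may not be sufficient):

    out/minikube-linux-arm64 start -p functional-200955 \
      --extra-config=kubelet.cgroup-driver=systemd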
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
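The mapping that matters in the inspect dump above is 8441/tcp, the apiserver port this profile publishes on 127.0.0.1. Standard docker inspect Go-template formatting recovers the host port directly:

    # Pull the first host binding for container port 8441/tcp.
    docker inspect functional-200955 \
      --format '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}'
    # prints 33526 for the container state captured above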
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (312.447329ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
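The split readings here, Host "Running" from this check and APIServer "Stopped" from the earlier one, are the signature of the kubelet crash loop: the Docker container is up while nothing inside it serves port 8441. The same template mechanism the harness uses can report the fields together (a sketch; the Kubelet field is assumed to be exposed the same way as Host and APIServer, which both appear in this report):

    out/minikube-linux-arm64 status -p functional-200955 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
    # this run: Running for Host, Stopped for APIServer (non-zero exit, as above)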
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr                                            │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format table --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls                                                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete         │ -p functional-769798                                                                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start          │ -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-200955 --alsologtostderr -v=8                                                                                                       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:latest                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add minikube-local-cache-test:functional-200955                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache delete minikube-local-cache-test:functional-200955                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl images                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	│ cache          │ functional-200955 cache reload                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ kubectl        │ functional-200955 kubectl -- --context functional-200955 get pods                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	│ start          │ -p functional-200955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:36 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:36:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
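
The header above documents klog's standard line layout, which every I/W/E/F line below follows. A minimal sketch for scripting against these logs (the regexp and field names are illustrative assumptions, not part of minikube):

package main

import (
	"fmt"
	"regexp"
)

// klogRe matches the documented layout:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I1213 10:36:05.024663  947325 out.go:360] Setting OutFile to fd 1 ..."
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
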
	I1213 10:36:05.024663  947325 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:36:05.024857  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.024862  947325 out.go:374] Setting ErrFile to fd 2...
	I1213 10:36:05.024867  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.025148  947325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:36:05.025578  947325 out.go:368] Setting JSON to false
	I1213 10:36:05.026512  947325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19114,"bootTime":1765603051,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:36:05.026573  947325 start.go:143] virtualization:  
	I1213 10:36:05.030119  947325 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:36:05.033180  947325 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:36:05.033273  947325 notify.go:221] Checking for updates...
	I1213 10:36:05.036966  947325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:36:05.041647  947325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:36:05.044535  947325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:36:05.047483  947325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:36:05.050413  947325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:36:05.053885  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:05.053982  947325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:36:05.081037  947325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:36:05.081166  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.151201  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.14075062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.151307  947325 docker.go:319] overlay module found
	I1213 10:36:05.154359  947325 out.go:179] * Using the docker driver based on existing profile
	I1213 10:36:05.157187  947325 start.go:309] selected driver: docker
	I1213 10:36:05.157194  947325 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.157283  947325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:36:05.157388  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.214971  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.204866403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.215380  947325 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:36:05.215410  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:05.215457  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:05.215500  947325 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.218694  947325 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:36:05.221699  947325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:36:05.224563  947325 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:36:05.227409  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:05.227448  947325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:36:05.227455  947325 cache.go:65] Caching tarball of preloaded images
	I1213 10:36:05.227491  947325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:36:05.227538  947325 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:36:05.227551  947325 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:36:05.227666  947325 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:36:05.247494  947325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:36:05.247505  947325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:36:05.247518  947325 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:36:05.247549  947325 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:36:05.247604  947325 start.go:364] duration metric: took 37.317µs to acquireMachinesLock for "functional-200955"
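
The lock above is acquired in 37µs because nothing else holds it, but its parameters ({Delay:500ms Timeout:10m0s}) imply a retry-until-timeout pattern. A minimal sketch of that pattern using an exclusive lock file (tryLock and the path are illustrative assumptions, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock attempts to create the lock file exclusively; failure with
// ErrExist means another process currently holds the lock.
func tryLock(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
	if err != nil {
		if errors.Is(err, os.ErrExist) {
			return false, nil
		}
		return false, err
	}
	return true, f.Close()
}

// acquire retries tryLock every delay until timeout elapses.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := tryLock(path)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove("/tmp/machines.lock")
	fmt.Println("lock acquired")
}
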
	I1213 10:36:05.247623  947325 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:36:05.247627  947325 fix.go:54] fixHost starting: 
	I1213 10:36:05.247894  947325 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:36:05.265053  947325 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:36:05.265102  947325 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:36:05.268458  947325 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:36:05.268485  947325 machine.go:94] provisionDockerMachine start ...
	I1213 10:36:05.268569  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.285699  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.286021  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.286027  947325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:36:05.433614  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.433628  947325 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:36:05.433698  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.452166  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.452470  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.452478  947325 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:36:05.611951  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.612044  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.630892  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.631191  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.631205  947325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:36:05.782771  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:36:05.782787  947325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:36:05.782810  947325 ubuntu.go:190] setting up certificates
	I1213 10:36:05.782824  947325 provision.go:84] configureAuth start
	I1213 10:36:05.782884  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:05.800513  947325 provision.go:143] copyHostCerts
	I1213 10:36:05.800580  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:36:05.800588  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:36:05.800662  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:36:05.800773  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:36:05.800777  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:36:05.800802  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:36:05.800861  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:36:05.800865  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:36:05.800887  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:36:05.800938  947325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
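
provision.go regenerates a docker-machine server certificate signed by the local CA, carrying the SANs listed in the line above. A minimal sketch of the same idea with crypto/x509 (key size, serial numbers, and output handling are illustrative assumptions; validity mirrors the 26280h CertExpiration in the profile):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-200955"}},
		DNSNames:     []string{"functional-200955", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
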
	I1213 10:36:06.162765  947325 provision.go:177] copyRemoteCerts
	I1213 10:36:06.162821  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:36:06.162864  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.179964  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
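
The "native" SSH client above dials the container's forwarded SSH port (127.0.0.1:33523) as user docker with the profile's id_rsa key. A minimal sketch of the same round trip with golang.org/x/crypto/ssh (port, user, and key path taken from the log; host-key checking skipped only because the target is a localhost-forwarded test container):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33523", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
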
	I1213 10:36:06.285273  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:36:06.303138  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:36:06.321000  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:36:06.339159  947325 provision.go:87] duration metric: took 556.311814ms to configureAuth
	I1213 10:36:06.339177  947325 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:36:06.339382  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:06.339492  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.357323  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:06.357649  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:06.357662  947325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:36:06.705283  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:36:06.705297  947325 machine.go:97] duration metric: took 1.436804594s to provisionDockerMachine
	I1213 10:36:06.705307  947325 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:36:06.705318  947325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:36:06.705379  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:36:06.705435  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.722886  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.829449  947325 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:36:06.832816  947325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:36:06.832847  947325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:36:06.832858  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:36:06.832914  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:36:06.832996  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:36:06.833088  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:36:06.833134  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:36:06.840686  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:06.859025  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:36:06.877717  947325 start.go:296] duration metric: took 172.395592ms for postStartSetup
	I1213 10:36:06.877814  947325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:36:06.877857  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.896880  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.998897  947325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:36:07.005668  947325 fix.go:56] duration metric: took 1.758032508s for fixHost
	I1213 10:36:07.005685  947325 start.go:83] releasing machines lock for "functional-200955", held for 1.758074248s
	I1213 10:36:07.005790  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:07.024345  947325 ssh_runner.go:195] Run: cat /version.json
	I1213 10:36:07.024397  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.024410  947325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:36:07.024473  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.045627  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.056017  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.235962  947325 ssh_runner.go:195] Run: systemctl --version
	I1213 10:36:07.243338  947325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:36:07.293399  947325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:36:07.297828  947325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:36:07.297890  947325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:36:07.305998  947325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:36:07.306012  947325 start.go:496] detecting cgroup driver to use...
	I1213 10:36:07.306043  947325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:36:07.306089  947325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:36:07.321360  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:36:07.334818  947325 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:36:07.334873  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:36:07.350268  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:36:07.363266  947325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:36:07.482802  947325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:36:07.601250  947325 docker.go:234] disabling docker service ...
	I1213 10:36:07.601314  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:36:07.616649  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:36:07.630193  947325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:36:07.750803  947325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:36:07.872755  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:36:07.885775  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:36:07.901044  947325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:36:07.901118  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.910913  947325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:36:07.910999  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.920242  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.929183  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.938207  947325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:36:07.946601  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.956231  947325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.964904  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
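
Each sed invocation above rewrites one key in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls). A Go equivalent of the pause_image edit, sketched under the assumption that a plain line-level rewrite, like sed's, is sufficient here:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
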
	I1213 10:36:07.974470  947325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:36:07.983694  947325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:36:07.992492  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.121808  947325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:36:08.297420  947325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:36:08.297489  947325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:36:08.301247  947325 start.go:564] Will wait 60s for crictl version
	I1213 10:36:08.301305  947325 ssh_runner.go:195] Run: which crictl
	I1213 10:36:08.304718  947325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:36:08.329152  947325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:36:08.329228  947325 ssh_runner.go:195] Run: crio --version
	I1213 10:36:08.358630  947325 ssh_runner.go:195] Run: crio --version
	I1213 10:36:08.393160  947325 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:36:08.396025  947325 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:36:08.412435  947325 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:36:08.419349  947325 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:36:08.422234  947325 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:36:08.422367  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:08.422431  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.457237  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.457249  947325 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:36:08.457306  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.483246  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.483258  947325 cache_images.go:86] Images are preloaded, skipping loading
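
The preload check above shells out to `sudo crictl images --output json` and verifies that every required image is already present. A minimal sketch of that verification (the JSON field names follow crictl's list-images output as I understand it; treat the shape and the expected-image list as assumptions):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList models the subset of `crictl images --output json` we need.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{"registry.k8s.io/pause:3.10.1"} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}
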
	I1213 10:36:08.483264  947325 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:36:08.483360  947325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:36:08.483446  947325 ssh_runner.go:195] Run: crio config
	I1213 10:36:08.545147  947325 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:36:08.545173  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:08.545183  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:08.545197  947325 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:36:08.545221  947325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:36:08.545347  947325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
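
The generated kubeadm.yaml above is a stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal sketch of walking such a stream with gopkg.in/yaml.v3 (the field selection is illustrative):

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Decode one document at a time until the stream ends.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
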
	
	I1213 10:36:08.545423  947325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:36:08.553515  947325 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:36:08.553607  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:36:08.561293  947325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:36:08.574385  947325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:36:08.587429  947325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 10:36:08.600337  947325 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:36:08.603994  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.714374  947325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:36:08.729978  947325 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:36:08.729989  947325 certs.go:195] generating shared ca certs ...
	I1213 10:36:08.730004  947325 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:36:08.730137  947325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:36:08.730179  947325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:36:08.730184  947325 certs.go:257] generating profile certs ...
	I1213 10:36:08.730263  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:36:08.730310  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:36:08.730347  947325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:36:08.730463  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:36:08.730496  947325 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:36:08.730503  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:36:08.730557  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:36:08.730581  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:36:08.730604  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:36:08.730645  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:08.731237  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:36:08.752034  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:36:08.773437  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:36:08.794430  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:36:08.812223  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:36:08.829741  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:36:08.846903  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:36:08.865036  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:36:08.883435  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:36:08.901321  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:36:08.919555  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:36:08.937123  947325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:36:08.950079  947325 ssh_runner.go:195] Run: openssl version
	I1213 10:36:08.956456  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.964062  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:36:08.971445  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975220  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975278  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:36:09.016546  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:36:09.024284  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.031776  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:36:09.039308  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.042991  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.043047  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.084141  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:36:09.091531  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.098770  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:36:09.106212  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.109989  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.110044  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.153254  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:36:09.160715  947325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:36:09.164506  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:36:09.205710  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:36:09.247436  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:36:09.288348  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:36:09.331611  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:36:09.374582  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
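
Each `openssl x509 -checkend 86400` run above asks a single question: does the certificate expire within the next 24 hours? The same test in Go's crypto/x509 (path taken from the log; the helper name is mine):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
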
	I1213 10:36:09.417486  947325 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:09.417589  947325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:36:09.417682  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.449632  947325 cri.go:89] found id: ""
	I1213 10:36:09.449706  947325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:36:09.457511  947325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:36:09.457521  947325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:36:09.457596  947325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:36:09.465280  947325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.465840  947325 kubeconfig.go:125] found "functional-200955" server: "https://192.168.49.2:8441"
	I1213 10:36:09.467296  947325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:36:09.475528  947325 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:21:33.398300096 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:36:08.597035311 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
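
The reconfigure decision above hinges on `diff -u old new`: exit status 0 means no drift, 1 means drift, anything else is an error. A minimal sketch of that three-way split (paths from the log; the branch messages are illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	switch {
	case err == nil:
		fmt.Println("no drift; keep running cluster as-is")
	case cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 1:
		// diff exits 1 when the files differ; stdout holds the hunks.
		fmt.Printf("drift detected, will reconfigure:\n%s", out)
	default:
		log.Fatal(err)
	}
}
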
	I1213 10:36:09.475546  947325 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:36:09.475557  947325 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:36:09.475616  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.507924  947325 cri.go:89] found id: ""
	I1213 10:36:09.508000  947325 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:36:09.528470  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:36:09.536474  947325 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:25 /etc/kubernetes/scheduler.conf
	
	I1213 10:36:09.536539  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:36:09.544588  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:36:09.552476  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.552532  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:36:09.560285  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.567834  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.567887  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.575592  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:36:09.583902  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.583961  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
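(Editor's note: the grep/rm pairs above enforce a simple invariant: any kubeconfig that no longer references https://control-plane.minikube.internal:8441 is deleted so the kubeadm kubeconfig phase can regenerate it. A sketch, assuming direct file access instead of grep over SSH; admin.conf passed the grep in the log, so it is left out:)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, path := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err != nil {
			continue // absent file: kubeadm will create it fresh
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = os.Remove(path) // regenerated by `kubeadm init phase kubeconfig all`
		}
	}
}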
	I1213 10:36:09.591566  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:36:09.599534  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:09.647986  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.096705  947325 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44869318s)
	I1213 10:36:11.096768  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.325396  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.390971  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
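(Editor's note: rather than a full `kubeadm init`, the restart path re-runs the five individual init phases shown above, in order, against the freshly copied kubeadm.yaml. A fail-fast sketch with the binary and config paths taken from the log; the real invocation also goes through sudo with a patched PATH:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same order as the log: certs, kubeconfigs, kubelet, static pods, etcd.
	for _, phase := range []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	} {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", cfg)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}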
	I1213 10:36:11.438539  947325 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:36:11.438613  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:11.939787  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:12.439662  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:12.939059  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:13.439009  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:13.939132  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:14.438804  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:14.939015  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:15.439388  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:15.939371  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:16.439364  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:16.939242  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:17.438810  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:17.938842  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:18.439574  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:18.939403  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:19.438808  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:19.938978  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:20.438838  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:20.938801  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:21.439702  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:21.938786  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:22.438810  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:22.938804  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:23.438760  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:23.939498  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:24.438837  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:24.939362  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:25.439492  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:25.939539  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:26.439316  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:26.939385  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:27.438813  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:27.938714  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:28.439704  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:28.938715  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:29.438702  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:29.938870  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:30.439316  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:30.939369  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:31.438789  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:31.938728  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:32.439400  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:32.938824  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:33.438805  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:33.938821  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:34.439650  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:34.939567  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:35.439266  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:35.938806  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:36.439740  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:36.938910  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:37.439180  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:37.939269  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:38.439067  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:38.938814  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:39.438987  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:39.939068  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:40.439485  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:40.939755  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:41.439569  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:41.939359  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:42.438799  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:42.939523  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:43.438794  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:43.939463  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:44.439348  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:44.938832  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:45.439512  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:45.939368  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:46.439415  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:46.938802  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:47.439348  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:47.938774  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:48.439499  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:48.938765  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:49.438815  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:49.939489  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:50.439425  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:50.938961  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:51.438899  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:51.938980  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:52.438768  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:52.939488  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:53.438802  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:53.938784  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:54.439567  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:54.939001  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:55.439034  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:55.939017  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:56.438830  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:56.938984  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:57.438956  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:57.939727  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:58.439422  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:58.938982  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:59.438715  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:59.938800  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:00.439480  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:00.939557  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:01.439633  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:01.938752  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:02.438782  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:02.939460  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:03.439509  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:03.939666  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:04.438756  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:04.938745  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:05.438791  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:05.938960  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:06.439641  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:06.939760  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:07.438893  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:07.939464  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:08.438808  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:08.938829  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:09.438860  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:09.939094  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:10.439786  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:10.939776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
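(Editor's note: the ~120 pgrep lines above are a single wait loop — poll for a kube-apiserver process every 500 ms until it appears or the loop gives up and falls through to log gathering, which is what happens next at 10:37:11. A sketch of that loop; the one-minute budget matches the 10:36:11–10:37:11 window seen here but is an assumption, not minikube's exact timeout:)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process exists.
// pgrep exits 0 on a match, so Run() == nil means the process appeared.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // the run above hits this path and starts gathering logs
	}
}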
	I1213 10:37:11.439694  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:11.439774  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:11.465379  947325 cri.go:89] found id: ""
	I1213 10:37:11.465394  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.465401  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:11.465406  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:11.465463  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:11.498722  947325 cri.go:89] found id: ""
	I1213 10:37:11.498736  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.498744  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:11.498749  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:11.498808  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:11.528435  947325 cri.go:89] found id: ""
	I1213 10:37:11.528450  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.528456  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:11.528461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:11.528520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:11.556412  947325 cri.go:89] found id: ""
	I1213 10:37:11.556428  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.556435  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:11.556439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:11.556495  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:11.582029  947325 cri.go:89] found id: ""
	I1213 10:37:11.582043  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.582050  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:11.582055  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:11.582111  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:11.606900  947325 cri.go:89] found id: ""
	I1213 10:37:11.606914  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.606921  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:11.606926  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:11.606995  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:11.631834  947325 cri.go:89] found id: ""
	I1213 10:37:11.631848  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.631855  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:11.631863  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:11.631873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:11.696990  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:11.697011  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:11.711905  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:11.711923  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:11.780498  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:11.780514  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:11.780525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:11.849149  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:11.849169  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
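(Editor's note: each diagnostic pass above sweeps the same seven component names through `crictl ps -a --quiet --name=...`; every empty result is what produces the `found id: ""` / "No container was found" pairs, before the pass repeats. A sketch of that sweep, again assuming local crictl:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl ps for %s failed: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) > 0 {
			fmt.Printf("%s: %v\n", name, ids)
		} else {
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}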
	I1213 10:37:14.380275  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:14.390300  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:14.390376  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:14.422358  947325 cri.go:89] found id: ""
	I1213 10:37:14.422408  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.422434  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:14.422439  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:14.422577  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:14.449364  947325 cri.go:89] found id: ""
	I1213 10:37:14.449379  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.449386  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:14.449391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:14.449448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:14.478529  947325 cri.go:89] found id: ""
	I1213 10:37:14.478543  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.478550  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:14.478555  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:14.478612  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:14.521652  947325 cri.go:89] found id: ""
	I1213 10:37:14.521666  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.521673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:14.521678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:14.521736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:14.558521  947325 cri.go:89] found id: ""
	I1213 10:37:14.558535  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.558542  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:14.558547  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:14.558605  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:14.582435  947325 cri.go:89] found id: ""
	I1213 10:37:14.582448  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.582455  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:14.582461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:14.582518  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:14.607776  947325 cri.go:89] found id: ""
	I1213 10:37:14.607791  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.607799  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:14.607807  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:14.607816  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:14.673008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:14.673028  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:14.688569  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:14.688585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:14.753510  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:14.753524  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:14.753556  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:14.820848  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:14.820868  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:17.353563  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:17.363824  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:17.363887  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:17.393249  947325 cri.go:89] found id: ""
	I1213 10:37:17.393263  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.393271  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:17.393275  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:17.393334  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:17.421143  947325 cri.go:89] found id: ""
	I1213 10:37:17.421157  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.421164  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:17.421169  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:17.421226  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:17.445347  947325 cri.go:89] found id: ""
	I1213 10:37:17.445361  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.445368  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:17.445372  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:17.445428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:17.474380  947325 cri.go:89] found id: ""
	I1213 10:37:17.474406  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.474413  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:17.474419  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:17.474502  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:17.510146  947325 cri.go:89] found id: ""
	I1213 10:37:17.510160  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.510167  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:17.510172  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:17.510228  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:17.544872  947325 cri.go:89] found id: ""
	I1213 10:37:17.544897  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.544911  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:17.544917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:17.544987  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:17.570527  947325 cri.go:89] found id: ""
	I1213 10:37:17.570542  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.570549  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:17.570556  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:17.570567  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:17.634904  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:17.634924  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:17.649198  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:17.649216  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:17.710891  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:17.710910  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:17.710921  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:17.779540  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:17.779561  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.315323  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:20.326110  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:20.326185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:20.351285  947325 cri.go:89] found id: ""
	I1213 10:37:20.351299  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.351307  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:20.351312  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:20.351381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:20.377322  947325 cri.go:89] found id: ""
	I1213 10:37:20.377335  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.377343  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:20.377352  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:20.377413  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:20.403676  947325 cri.go:89] found id: ""
	I1213 10:37:20.403691  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.403698  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:20.403704  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:20.403766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:20.433713  947325 cri.go:89] found id: ""
	I1213 10:37:20.433736  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.433744  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:20.433749  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:20.433809  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:20.459243  947325 cri.go:89] found id: ""
	I1213 10:37:20.459258  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.459265  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:20.459270  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:20.459328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:20.505295  947325 cri.go:89] found id: ""
	I1213 10:37:20.505310  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.505317  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:20.505322  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:20.505382  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:20.534487  947325 cri.go:89] found id: ""
	I1213 10:37:20.534502  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.534510  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:20.534518  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:20.534529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.562816  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:20.562833  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:20.626774  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:20.626798  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:20.642510  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:20.642526  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:20.716150  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:20.716164  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:20.716176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.288286  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:23.298705  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:23.298766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:23.333024  947325 cri.go:89] found id: ""
	I1213 10:37:23.333038  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.333046  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:23.333051  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:23.333115  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:23.358903  947325 cri.go:89] found id: ""
	I1213 10:37:23.358916  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.358924  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:23.358929  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:23.358989  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:23.384787  947325 cri.go:89] found id: ""
	I1213 10:37:23.384801  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.384808  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:23.384812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:23.384871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:23.410002  947325 cri.go:89] found id: ""
	I1213 10:37:23.410036  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.410061  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:23.410086  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:23.410150  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:23.434837  947325 cri.go:89] found id: ""
	I1213 10:37:23.434865  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.434872  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:23.434878  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:23.434945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:23.464375  947325 cri.go:89] found id: ""
	I1213 10:37:23.464389  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.464396  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:23.464402  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:23.464472  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:23.506074  947325 cri.go:89] found id: ""
	I1213 10:37:23.506089  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.506097  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:23.506104  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:23.506116  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.589169  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:23.589191  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:23.619461  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:23.619477  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:23.688698  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:23.688720  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:23.703620  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:23.703637  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:23.771897  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:26.272169  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:26.282101  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:26.282172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:26.308055  947325 cri.go:89] found id: ""
	I1213 10:37:26.308071  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.308078  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:26.308086  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:26.308147  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:26.334700  947325 cri.go:89] found id: ""
	I1213 10:37:26.334722  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.334729  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:26.334735  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:26.334799  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:26.360726  947325 cri.go:89] found id: ""
	I1213 10:37:26.360749  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.360758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:26.360763  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:26.360830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:26.385135  947325 cri.go:89] found id: ""
	I1213 10:37:26.385149  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.385157  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:26.385162  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:26.385233  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:26.412837  947325 cri.go:89] found id: ""
	I1213 10:37:26.412851  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.412858  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:26.412863  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:26.412942  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:26.437812  947325 cri.go:89] found id: ""
	I1213 10:37:26.437827  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.437834  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:26.437839  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:26.437900  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:26.463570  947325 cri.go:89] found id: ""
	I1213 10:37:26.463584  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.463592  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:26.463600  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:26.463611  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:26.534802  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:26.534823  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:26.550643  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:26.550658  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:26.612829  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:26.612839  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:26.612849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:26.681461  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:26.681480  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:29.210709  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:29.221193  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:29.221255  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:29.249274  947325 cri.go:89] found id: ""
	I1213 10:37:29.249289  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.249297  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:29.249301  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:29.249369  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:29.276685  947325 cri.go:89] found id: ""
	I1213 10:37:29.276709  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.276718  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:29.276723  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:29.276788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:29.303267  947325 cri.go:89] found id: ""
	I1213 10:37:29.303281  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.303289  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:29.303294  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:29.303355  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:29.328158  947325 cri.go:89] found id: ""
	I1213 10:37:29.328173  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.328180  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:29.328186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:29.328244  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:29.355541  947325 cri.go:89] found id: ""
	I1213 10:37:29.355556  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.355565  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:29.355570  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:29.355627  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:29.381411  947325 cri.go:89] found id: ""
	I1213 10:37:29.381426  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.381433  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:29.381439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:29.381501  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:29.407073  947325 cri.go:89] found id: ""
	I1213 10:37:29.407088  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.407094  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:29.407101  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:29.407113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:29.422330  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:29.422347  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:29.498825  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:29.498837  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:29.498850  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:29.575835  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:29.575856  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:29.607770  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:29.607790  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.181248  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.191812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:32.191876  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:32.217211  947325 cri.go:89] found id: ""
	I1213 10:37:32.217225  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.217233  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:32.217238  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:32.217293  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:32.243073  947325 cri.go:89] found id: ""
	I1213 10:37:32.243087  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.243095  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:32.243100  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:32.243172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:32.272999  947325 cri.go:89] found id: ""
	I1213 10:37:32.273013  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.273020  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:32.273025  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:32.273084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:32.299079  947325 cri.go:89] found id: ""
	I1213 10:37:32.299092  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.299099  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:32.299104  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:32.299161  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:32.328707  947325 cri.go:89] found id: ""
	I1213 10:37:32.328722  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.328729  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:32.328734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:32.328795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:32.354361  947325 cri.go:89] found id: ""
	I1213 10:37:32.354375  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.354382  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:32.354388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:32.354448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:32.380069  947325 cri.go:89] found id: ""
	I1213 10:37:32.380083  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.380089  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:32.380096  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:32.380107  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.445012  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:32.445036  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:32.460199  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:32.460223  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:32.549445  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:32.549456  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:32.549467  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:32.617595  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:32.617617  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.148911  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.159421  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:35.159482  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:35.186963  947325 cri.go:89] found id: ""
	I1213 10:37:35.186976  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.186984  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:35.186989  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:35.187046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:35.216128  947325 cri.go:89] found id: ""
	I1213 10:37:35.216142  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.216153  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:35.216158  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:35.216217  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:35.244930  947325 cri.go:89] found id: ""
	I1213 10:37:35.244945  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.244953  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:35.244958  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:35.245020  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:35.270186  947325 cri.go:89] found id: ""
	I1213 10:37:35.270200  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.270207  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:35.270212  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:35.270268  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:35.296166  947325 cri.go:89] found id: ""
	I1213 10:37:35.296180  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.296187  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:35.296192  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:35.296249  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:35.325322  947325 cri.go:89] found id: ""
	I1213 10:37:35.325337  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.325344  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:35.325349  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:35.325411  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:35.350870  947325 cri.go:89] found id: ""
	I1213 10:37:35.350884  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.350892  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:35.350900  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:35.350911  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:35.365840  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:35.365857  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:35.428973  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:35.428993  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:35.429004  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:35.497503  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:35.497522  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.530732  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:35.530751  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:38.099975  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.110243  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:38.110306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:38.140777  947325 cri.go:89] found id: ""
	I1213 10:37:38.140792  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.140798  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:38.140804  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:38.140871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:38.167186  947325 cri.go:89] found id: ""
	I1213 10:37:38.167200  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.167207  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:38.167212  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:38.167276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:38.193305  947325 cri.go:89] found id: ""
	I1213 10:37:38.193318  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.193326  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:38.193331  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:38.193388  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:38.219451  947325 cri.go:89] found id: ""
	I1213 10:37:38.219464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.219472  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:38.219477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:38.219542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:38.249284  947325 cri.go:89] found id: ""
	I1213 10:37:38.249299  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.249306  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:38.249311  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:38.249380  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:38.275451  947325 cri.go:89] found id: ""
	I1213 10:37:38.275464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.275471  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:38.275477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:38.275538  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:38.300476  947325 cri.go:89] found id: ""
	I1213 10:37:38.300490  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.300497  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:38.300504  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:38.300517  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:38.366681  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:38.366700  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:38.381405  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:38.381423  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:38.441215  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:38.441225  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:38.441236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:38.508504  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:38.508525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.051455  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.061451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:41.061522  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:41.087312  947325 cri.go:89] found id: ""
	I1213 10:37:41.087331  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.087338  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:41.087343  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:41.087416  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:41.116231  947325 cri.go:89] found id: ""
	I1213 10:37:41.116246  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.116253  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:41.116258  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:41.116316  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:41.147429  947325 cri.go:89] found id: ""
	I1213 10:37:41.147444  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.147451  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:41.147457  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:41.147516  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:41.176551  947325 cri.go:89] found id: ""
	I1213 10:37:41.176565  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.176573  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:41.176578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:41.176634  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:41.204132  947325 cri.go:89] found id: ""
	I1213 10:37:41.204146  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.204154  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:41.204159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:41.204223  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:41.230785  947325 cri.go:89] found id: ""
	I1213 10:37:41.230799  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.230807  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:41.230813  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:41.230880  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:41.256411  947325 cri.go:89] found id: ""
	I1213 10:37:41.256425  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.256433  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:41.256440  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:41.256451  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.285617  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:41.285636  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:41.356895  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:41.356914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:41.371698  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:41.371714  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:41.436289  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:41.436299  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:41.436309  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.006670  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.021718  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:44.021788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:44.048534  947325 cri.go:89] found id: ""
	I1213 10:37:44.048549  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.048565  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:44.048571  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:44.048674  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:44.079425  947325 cri.go:89] found id: ""
	I1213 10:37:44.079439  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.079446  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:44.079451  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:44.079523  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:44.106317  947325 cri.go:89] found id: ""
	I1213 10:37:44.106334  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.106342  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:44.106348  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:44.106420  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:44.132520  947325 cri.go:89] found id: ""
	I1213 10:37:44.132534  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.132553  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:44.132558  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:44.132628  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:44.161205  947325 cri.go:89] found id: ""
	I1213 10:37:44.161219  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.161226  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:44.161231  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:44.161291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:44.187876  947325 cri.go:89] found id: ""
	I1213 10:37:44.187890  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.187898  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:44.187903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:44.187961  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:44.215854  947325 cri.go:89] found id: ""
	I1213 10:37:44.215869  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.215876  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:44.215884  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:44.215894  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:44.284854  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:44.284866  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:44.284876  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.355349  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:44.355373  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:44.384733  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:44.384752  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:44.453769  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:44.453788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:46.969736  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.979972  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:46.980038  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:47.007061  947325 cri.go:89] found id: ""
	I1213 10:37:47.007075  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.007082  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:47.007087  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:47.007146  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:47.036818  947325 cri.go:89] found id: ""
	I1213 10:37:47.036832  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.036858  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:47.036863  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:47.036921  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:47.061328  947325 cri.go:89] found id: ""
	I1213 10:37:47.061342  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.061349  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:47.061355  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:47.061415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:47.089017  947325 cri.go:89] found id: ""
	I1213 10:37:47.089032  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.089039  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:47.089044  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:47.089103  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:47.114790  947325 cri.go:89] found id: ""
	I1213 10:37:47.114803  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.114810  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:47.114817  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:47.114877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:47.139554  947325 cri.go:89] found id: ""
	I1213 10:37:47.139575  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.139583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:47.139589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:47.139654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:47.165228  947325 cri.go:89] found id: ""
	I1213 10:37:47.165241  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.165248  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:47.165256  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:47.165266  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:47.232293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:47.232313  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:47.261718  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:47.261736  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:47.331592  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:47.331613  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:47.345881  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:47.345897  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:47.412948  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:49.913659  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.923942  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:49.924005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:49.951850  947325 cri.go:89] found id: ""
	I1213 10:37:49.951863  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.951871  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:49.951876  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:49.951936  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:49.976949  947325 cri.go:89] found id: ""
	I1213 10:37:49.976963  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.976971  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:49.976976  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:49.977034  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:50.020670  947325 cri.go:89] found id: ""
	I1213 10:37:50.020686  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.020693  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:50.020698  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:50.020779  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:50.048299  947325 cri.go:89] found id: ""
	I1213 10:37:50.048316  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.048323  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:50.048328  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:50.048397  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:50.075060  947325 cri.go:89] found id: ""
	I1213 10:37:50.075074  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.075081  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:50.075087  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:50.075148  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:50.104579  947325 cri.go:89] found id: ""
	I1213 10:37:50.104593  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.104601  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:50.104607  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:50.104666  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:50.132679  947325 cri.go:89] found id: ""
	I1213 10:37:50.132693  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.132701  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:50.132714  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:50.132725  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:50.197209  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:50.197219  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:50.197230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:50.267157  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:50.267176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:50.297061  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:50.297077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:50.363929  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:50.363950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:52.879245  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.889673  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:52.889741  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:52.914746  947325 cri.go:89] found id: ""
	I1213 10:37:52.914768  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.914776  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:52.914781  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:52.914845  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:52.941523  947325 cri.go:89] found id: ""
	I1213 10:37:52.941554  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.941562  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:52.941567  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:52.941623  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:52.967010  947325 cri.go:89] found id: ""
	I1213 10:37:52.967027  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.967035  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:52.967040  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:52.967141  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:52.992300  947325 cri.go:89] found id: ""
	I1213 10:37:52.992313  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.992321  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:52.992326  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:52.992386  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:53.020045  947325 cri.go:89] found id: ""
	I1213 10:37:53.020058  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.020074  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:53.020081  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:53.020140  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:53.053897  947325 cri.go:89] found id: ""
	I1213 10:37:53.053911  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.053918  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:53.053923  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:53.053982  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:53.079867  947325 cri.go:89] found id: ""
	I1213 10:37:53.079882  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.079890  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:53.079897  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:53.079908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:53.144913  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:53.144932  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:53.159844  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:53.159861  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:53.226427  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:53.226436  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:53.226447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:53.294490  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:53.294510  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
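
	The block above is one pass of minikube's wait-for-apiserver loop, and it repeats below roughly every three seconds: probe for a kube-apiserver process with pgrep, list each expected control-plane container with crictl (every listing returns empty), then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. kubectl fails each time because nothing is listening on localhost:8441. A minimal sketch of the same probe sequence, runnable by hand on the node (the individual commands are copied verbatim from the log; running them as a standalone check like this is an illustration, not minikube's own code):

		# Is a kube-apiserver process running for this profile?
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		# Has the CRI runtime ever created the container (any state)?
		sudo crictl ps -a --quiet --name=kube-apiserver
		# If both are empty, the control plane never started; the kubelet
		# and CRI-O journals usually say why:
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
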
	I1213 10:37:55.827710  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.837950  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:55.838028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:55.863239  947325 cri.go:89] found id: ""
	I1213 10:37:55.863253  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.863260  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:55.863265  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:55.863331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:55.892876  947325 cri.go:89] found id: ""
	I1213 10:37:55.892890  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.892897  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:55.892902  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:55.892962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:55.919038  947325 cri.go:89] found id: ""
	I1213 10:37:55.919051  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.919059  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:55.919064  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:55.919123  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:55.944982  947325 cri.go:89] found id: ""
	I1213 10:37:55.944997  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.945004  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:55.945009  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:55.945066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:55.974750  947325 cri.go:89] found id: ""
	I1213 10:37:55.974764  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.974771  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:55.974776  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:55.974836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:56.006337  947325 cri.go:89] found id: ""
	I1213 10:37:56.006352  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.006360  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:56.006365  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:56.006429  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:56.033183  947325 cri.go:89] found id: ""
	I1213 10:37:56.033199  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.033206  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:56.033214  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:56.033225  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:56.098781  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:56.098801  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:56.113910  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:56.113933  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:56.179999  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:56.180009  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:56.180020  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:56.248249  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:56.248271  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:58.777669  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.788383  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:58.788443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:58.815846  947325 cri.go:89] found id: ""
	I1213 10:37:58.815861  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.815868  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:58.815873  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:58.815933  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:58.845912  947325 cri.go:89] found id: ""
	I1213 10:37:58.845926  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.845933  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:58.845938  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:58.846003  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:58.870933  947325 cri.go:89] found id: ""
	I1213 10:37:58.870947  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.870954  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:58.870959  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:58.871017  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:58.900972  947325 cri.go:89] found id: ""
	I1213 10:37:58.900986  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.900993  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:58.900998  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:58.901054  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:58.926234  947325 cri.go:89] found id: ""
	I1213 10:37:58.926257  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.926266  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:58.926271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:58.926338  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:58.951314  947325 cri.go:89] found id: ""
	I1213 10:37:58.951328  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.951335  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:58.951340  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:58.951398  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:58.981974  947325 cri.go:89] found id: ""
	I1213 10:37:58.981989  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.981996  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:58.982003  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:58.982014  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:59.047152  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:59.047172  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:59.062001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:59.062019  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:59.127736  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:59.127748  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:59.127759  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:59.196288  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:59.196308  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:01.726269  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.738227  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:01.738290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:01.765402  947325 cri.go:89] found id: ""
	I1213 10:38:01.765416  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.765423  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:01.765428  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:01.765487  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:01.797073  947325 cri.go:89] found id: ""
	I1213 10:38:01.797087  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.797094  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:01.797105  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:01.797165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:01.822923  947325 cri.go:89] found id: ""
	I1213 10:38:01.822936  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.822943  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:01.822948  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:01.823004  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:01.847458  947325 cri.go:89] found id: ""
	I1213 10:38:01.847472  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.847479  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:01.847484  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:01.847542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:01.876363  947325 cri.go:89] found id: ""
	I1213 10:38:01.876376  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.876383  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:01.876388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:01.876445  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:01.901894  947325 cri.go:89] found id: ""
	I1213 10:38:01.901908  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.901915  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:01.901920  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:01.901977  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:01.927538  947325 cri.go:89] found id: ""
	I1213 10:38:01.927556  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.927563  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:01.927571  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:01.927585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:01.993043  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:01.993063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:02.009861  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:02.009878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:02.079070  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:02.079087  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:02.079097  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:02.150335  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:02.150355  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.680156  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.690471  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:04.690534  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:04.717027  947325 cri.go:89] found id: ""
	I1213 10:38:04.717042  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.717049  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:04.717055  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:04.717116  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:04.751100  947325 cri.go:89] found id: ""
	I1213 10:38:04.751114  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.751121  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:04.751126  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:04.751185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:04.785118  947325 cri.go:89] found id: ""
	I1213 10:38:04.785133  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.785140  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:04.785145  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:04.785206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:04.811838  947325 cri.go:89] found id: ""
	I1213 10:38:04.811852  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.811859  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:04.811864  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:04.811924  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:04.837476  947325 cri.go:89] found id: ""
	I1213 10:38:04.837489  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.837497  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:04.837502  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:04.837589  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:04.863616  947325 cri.go:89] found id: ""
	I1213 10:38:04.863630  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.863637  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:04.863642  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:04.864028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:04.897282  947325 cri.go:89] found id: ""
	I1213 10:38:04.897297  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.897304  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:04.897311  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:04.897322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:04.970089  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:04.970112  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.998787  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:04.998808  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:05.071114  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:05.071136  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:05.086764  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:05.086780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:05.152705  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:07.652961  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.663190  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:07.663256  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:07.687596  947325 cri.go:89] found id: ""
	I1213 10:38:07.687611  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.687619  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:07.687624  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:07.687682  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:07.712358  947325 cri.go:89] found id: ""
	I1213 10:38:07.712372  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.712379  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:07.712384  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:07.712443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:07.747606  947325 cri.go:89] found id: ""
	I1213 10:38:07.747620  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.747627  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:07.747632  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:07.747686  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:07.779928  947325 cri.go:89] found id: ""
	I1213 10:38:07.779942  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.779949  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:07.779954  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:07.780010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:07.809892  947325 cri.go:89] found id: ""
	I1213 10:38:07.809905  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.809912  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:07.809917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:07.809976  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:07.835954  947325 cri.go:89] found id: ""
	I1213 10:38:07.835969  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.835977  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:07.835983  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:07.836045  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:07.863613  947325 cri.go:89] found id: ""
	I1213 10:38:07.863628  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.863635  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:07.863643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:07.863653  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:07.934015  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:07.934035  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:07.949065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:07.949082  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:08.016099  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:08.016110  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:08.016120  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:08.086624  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:08.086643  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:10.620779  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.631455  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:10.631519  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:10.657004  947325 cri.go:89] found id: ""
	I1213 10:38:10.657018  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.657025  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:10.657031  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:10.657091  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:10.682863  947325 cri.go:89] found id: ""
	I1213 10:38:10.682879  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.682887  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:10.682892  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:10.682952  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:10.710656  947325 cri.go:89] found id: ""
	I1213 10:38:10.710671  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.710678  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:10.710684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:10.710744  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:10.751941  947325 cri.go:89] found id: ""
	I1213 10:38:10.751955  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.751962  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:10.751967  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:10.752027  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:10.784379  947325 cri.go:89] found id: ""
	I1213 10:38:10.784393  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.784400  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:10.784405  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:10.784462  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:10.812194  947325 cri.go:89] found id: ""
	I1213 10:38:10.812208  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.812215  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:10.812220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:10.812279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:10.837693  947325 cri.go:89] found id: ""
	I1213 10:38:10.837706  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.837714  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:10.837721  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:10.837732  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:10.903946  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:10.903965  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:10.918956  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:10.918972  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:10.991627  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:10.991638  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:10.991648  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:11.064139  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:11.064160  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:13.600555  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.610666  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:13.610728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:13.635608  947325 cri.go:89] found id: ""
	I1213 10:38:13.635622  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.635629  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:13.635635  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:13.635694  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:13.660494  947325 cri.go:89] found id: ""
	I1213 10:38:13.660509  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.660516  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:13.660521  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:13.660580  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:13.686792  947325 cri.go:89] found id: ""
	I1213 10:38:13.686807  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.686814  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:13.686820  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:13.686877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:13.712337  947325 cri.go:89] found id: ""
	I1213 10:38:13.712351  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.712358  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:13.712364  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:13.712421  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:13.751688  947325 cri.go:89] found id: ""
	I1213 10:38:13.751703  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.751710  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:13.751716  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:13.751771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:13.778873  947325 cri.go:89] found id: ""
	I1213 10:38:13.778886  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.778893  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:13.778898  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:13.778955  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:13.808036  947325 cri.go:89] found id: ""
	I1213 10:38:13.808050  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.808057  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:13.808065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:13.808081  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:13.874152  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:13.874162  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:13.874173  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:13.943404  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:13.943424  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:13.971540  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:13.971557  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:14.040558  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:14.040581  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:16.556175  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.566366  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:16.566428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:16.591757  947325 cri.go:89] found id: ""
	I1213 10:38:16.591772  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.591779  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:16.591785  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:16.591842  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:16.617244  947325 cri.go:89] found id: ""
	I1213 10:38:16.617259  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.617266  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:16.617271  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:16.617329  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:16.644168  947325 cri.go:89] found id: ""
	I1213 10:38:16.644182  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.644189  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:16.644194  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:16.644253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:16.673646  947325 cri.go:89] found id: ""
	I1213 10:38:16.673659  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.673666  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:16.673671  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:16.673729  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:16.698771  947325 cri.go:89] found id: ""
	I1213 10:38:16.698785  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.698793  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:16.698798  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:16.698857  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:16.726980  947325 cri.go:89] found id: ""
	I1213 10:38:16.726994  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.727001  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:16.727006  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:16.727066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:16.773642  947325 cri.go:89] found id: ""
	I1213 10:38:16.773657  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.773665  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:16.773673  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:16.773685  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:16.807643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:16.807660  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:16.874674  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:16.874698  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:16.890281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:16.890299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:16.958318  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:16.958330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:16.958343  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
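
The block above is one full iteration of minikube's apiserver wait loop, and the same cycle repeats below every few seconds: probe for a kube-apiserver process, enumerate CRI containers for each control-plane component, then fall back to gathering host logs. A minimal Go sketch of the polling pattern follows; it runs the same pgrep probe locally via os/exec rather than through minikube's SSH runner, and the two-minute deadline and three-second interval are illustrative values inferred from the timestamps, not minikube's actual constants.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's probe: `sudo pgrep -xnf
// kube-apiserver.*minikube.*` exits 0 only when a matching process
// exists, and a non-zero exit surfaces here as a non-nil error.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // the cycles above repeat roughly every 3s
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
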
	I1213 10:38:19.528319  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.539728  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:19.539789  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:19.571108  947325 cri.go:89] found id: ""
	I1213 10:38:19.571121  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.571129  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:19.571134  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:19.571194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:19.597765  947325 cri.go:89] found id: ""
	I1213 10:38:19.597779  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.597787  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:19.597792  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:19.597853  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:19.623110  947325 cri.go:89] found id: ""
	I1213 10:38:19.623124  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.623137  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:19.623142  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:19.623204  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:19.648553  947325 cri.go:89] found id: ""
	I1213 10:38:19.648568  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.648575  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:19.648580  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:19.648652  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:19.674550  947325 cri.go:89] found id: ""
	I1213 10:38:19.674565  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.674572  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:19.674577  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:19.674635  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:19.704458  947325 cri.go:89] found id: ""
	I1213 10:38:19.704473  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.704480  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:19.704486  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:19.704560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:19.742545  947325 cri.go:89] found id: ""
	I1213 10:38:19.742559  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.742566  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:19.742573  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:19.742584  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:19.818214  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:19.818236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:19.833741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:19.833757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:19.899700  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:19.899710  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:19.899731  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:19.969264  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:19.969284  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
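
Within each cycle, the seven `crictl ps -a --quiet --name=<component>` calls check one control-plane component at a time; `--quiet` prints only container IDs, one per line, so empty output is exactly what produces the `found id: ""` lines and the `No container was found matching ...` warnings above. A hypothetical standalone version of that sweep (assumes crictl and sudo are available on the host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers lists all containers (running or exited) whose name
// matches, mirroring `sudo crictl ps -a --quiet --name=<component>`.
func findContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil // treat a failed listing like an empty one
	}
	return strings.Fields(string(out)) // one container ID per line
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		if ids := findContainers(c); len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		} else {
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}
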
	I1213 10:38:22.501918  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.513303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:22.513368  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:22.542006  947325 cri.go:89] found id: ""
	I1213 10:38:22.542020  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.542028  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:22.542033  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:22.542109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:22.572046  947325 cri.go:89] found id: ""
	I1213 10:38:22.572061  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.572068  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:22.572073  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:22.572131  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:22.599640  947325 cri.go:89] found id: ""
	I1213 10:38:22.599654  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.599660  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:22.599665  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:22.599728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:22.628632  947325 cri.go:89] found id: ""
	I1213 10:38:22.628646  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.628653  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:22.628658  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:22.628717  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:22.655032  947325 cri.go:89] found id: ""
	I1213 10:38:22.655046  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.655053  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:22.655058  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:22.655119  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:22.682403  947325 cri.go:89] found id: ""
	I1213 10:38:22.682422  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.682431  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:22.682436  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:22.682511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:22.709263  947325 cri.go:89] found id: ""
	I1213 10:38:22.709277  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.709286  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:22.709293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:22.709307  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:22.748554  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:22.748573  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:22.820355  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:22.820376  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:22.836069  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:22.836100  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:22.902594  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:22.902605  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:22.902616  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.474313  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.484536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:25.484600  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:25.512648  947325 cri.go:89] found id: ""
	I1213 10:38:25.512662  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.512670  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:25.512675  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:25.512736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:25.545720  947325 cri.go:89] found id: ""
	I1213 10:38:25.545739  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.545746  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:25.545752  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:25.545821  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:25.572807  947325 cri.go:89] found id: ""
	I1213 10:38:25.572820  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.572827  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:25.572832  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:25.572890  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:25.597850  947325 cri.go:89] found id: ""
	I1213 10:38:25.597864  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.597871  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:25.597876  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:25.597939  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:25.622944  947325 cri.go:89] found id: ""
	I1213 10:38:25.622958  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.622965  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:25.622971  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:25.623030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:25.647255  947325 cri.go:89] found id: ""
	I1213 10:38:25.647268  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.647276  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:25.647281  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:25.647339  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:25.672821  947325 cri.go:89] found id: ""
	I1213 10:38:25.672837  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.672844  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:25.672864  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:25.672875  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.744377  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:25.744397  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:25.773682  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:25.773699  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:25.843372  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:25.843396  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:25.858420  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:25.858437  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:25.923733  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
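
Every describe-nodes attempt in this excerpt fails identically: `dial tcp [::1]:8441: connect: connection refused` means nothing is accepting connections on the apiserver port (this profile evidently uses 8441 rather than the default 8443), which is consistent with the empty kube-apiserver container listings above. The check kubectl's error implies can be reproduced with a plain TCP dial; a minimal sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Can anything accept a TCP connection on the apiserver port this
	// profile uses? kubectl's retries above amount to this same test.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}

While the apiserver stays down, this prints the same "connection refused" error that fills the stderr blocks throughout this section.
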
	I1213 10:38:28.424008  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.434425  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:28.434490  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:28.459479  947325 cri.go:89] found id: ""
	I1213 10:38:28.459493  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.459501  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:28.459506  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:28.459569  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:28.488343  947325 cri.go:89] found id: ""
	I1213 10:38:28.488357  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.488365  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:28.488370  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:28.488431  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:28.513634  947325 cri.go:89] found id: ""
	I1213 10:38:28.513649  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.513656  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:28.513661  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:28.513719  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:28.540169  947325 cri.go:89] found id: ""
	I1213 10:38:28.540182  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.540190  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:28.540195  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:28.540253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:28.564331  947325 cri.go:89] found id: ""
	I1213 10:38:28.564344  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.564351  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:28.564356  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:28.564415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:28.592829  947325 cri.go:89] found id: ""
	I1213 10:38:28.592844  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.592851  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:28.592856  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:28.592913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:28.618020  947325 cri.go:89] found id: ""
	I1213 10:38:28.618035  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.618044  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:28.618052  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:28.618063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:28.685306  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:28.685326  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:28.713761  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:28.713779  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:28.794463  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:28.794484  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:28.809677  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:28.809696  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:28.870924  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
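
Note also which kubectl gets invoked: minikube shells out to the binary it pinned for the cluster's Kubernetes version, `/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl`, pointed at the in-VM kubeconfig, presumably so the diagnostics are immune to version skew with whatever kubectl is on the host's PATH (that rationale is an inference, not stated in the log). A sketch of that invocation which, like the logs.go warnings above, keeps stdout and stderr separate so both can be reported:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Command string copied from the log; run through bash -c as the
	// ssh_runner does.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	fmt.Println("stdout:", stdout.String())
	fmt.Println("stderr:", stderr.String())
	if err != nil {
		fmt.Println("exit:", err) // "exit status 1" while the apiserver is down
	}
}
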
	I1213 10:38:31.371199  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:31.381501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.381583  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.408362  947325 cri.go:89] found id: ""
	I1213 10:38:31.408376  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.408383  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.408388  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.408454  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.434743  947325 cri.go:89] found id: ""
	I1213 10:38:31.434758  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.434766  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.434772  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.434831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.467710  947325 cri.go:89] found id: ""
	I1213 10:38:31.467724  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.467731  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.467736  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.467795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.493177  947325 cri.go:89] found id: ""
	I1213 10:38:31.493191  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.493198  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.493203  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.493263  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.517966  947325 cri.go:89] found id: ""
	I1213 10:38:31.517980  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.517987  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.517992  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.518057  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.542186  947325 cri.go:89] found id: ""
	I1213 10:38:31.542201  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.542208  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.542213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.542270  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.567569  947325 cri.go:89] found id: ""
	I1213 10:38:31.567583  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.567590  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:31.567598  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.567609  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:31.633128  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.633147  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.647898  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.647916  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.713585  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.713595  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:31.713606  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:31.784338  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.784357  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.315454  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.327061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.327130  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.356795  947325 cri.go:89] found id: ""
	I1213 10:38:34.356809  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.356817  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.356822  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.356892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.384789  947325 cri.go:89] found id: ""
	I1213 10:38:34.384804  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.384812  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.384817  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.384907  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.410778  947325 cri.go:89] found id: ""
	I1213 10:38:34.410791  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.410799  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.410804  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.410861  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.440426  947325 cri.go:89] found id: ""
	I1213 10:38:34.440440  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.440454  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.440459  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.440514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.465148  947325 cri.go:89] found id: ""
	I1213 10:38:34.465162  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.465170  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.465175  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.465236  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.491230  947325 cri.go:89] found id: ""
	I1213 10:38:34.491245  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.491253  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.491259  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.491364  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.520190  947325 cri.go:89] found id: ""
	I1213 10:38:34.520205  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.520213  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:34.520220  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.520235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.552635  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.552652  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.617894  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.617914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:34.632507  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.632528  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.697693  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:34.697704  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:34.697715  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.276776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.287236  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.287306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.314091  947325 cri.go:89] found id: ""
	I1213 10:38:37.314105  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.314112  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.314118  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.314180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.343079  947325 cri.go:89] found id: ""
	I1213 10:38:37.343092  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.343099  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.343104  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.343162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.371406  947325 cri.go:89] found id: ""
	I1213 10:38:37.371420  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.371428  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.371432  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.371489  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.400383  947325 cri.go:89] found id: ""
	I1213 10:38:37.400398  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.400405  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.400415  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.400473  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.432217  947325 cri.go:89] found id: ""
	I1213 10:38:37.432232  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.432240  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.432245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.432306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.459687  947325 cri.go:89] found id: ""
	I1213 10:38:37.459701  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.459708  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.459713  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.459771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.491295  947325 cri.go:89] found id: ""
	I1213 10:38:37.491309  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.491316  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.491324  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:37.491335  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.569044  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.569068  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.598399  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.598416  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:37.669854  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.669873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.685001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.685024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.764039  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.754418   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.755520   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.757588   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.758501   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.759525   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:37.754418   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.755520   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.757588   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.758501   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.759525   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
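
When the container listings come back empty, each cycle still collects host-level sources: the last 400 lines of the kubelet and crio journald units, kernel messages via `dmesg -PH -L=never --level warn,err,crit,alert,emerg` (no pager, human-readable timestamps, color disabled, warning severity and above) piped through `tail`, and a container-status fallback that prefers crictl but degrades to docker (`sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`). A compact sketch of that gathering step with the command strings copied from the log (error handling simplified; each source is best-effort):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic through bash -c, as the log's ssh_runner
// does, and prints whatever came back even on failure.
func gather(label, cmd string) {
	fmt.Printf("==> %s\n", label)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("(failed: %v)\n", err) // keep going; other sources may work
	}
	fmt.Println(string(out))
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("CRI-O", `sudo journalctl -u crio -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
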
	I1213 10:38:40.265130  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.276597  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.276660  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.301799  947325 cri.go:89] found id: ""
	I1213 10:38:40.301815  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.301822  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.301828  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.301884  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.328096  947325 cri.go:89] found id: ""
	I1213 10:38:40.328110  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.328117  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.328122  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.328180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.352505  947325 cri.go:89] found id: ""
	I1213 10:38:40.352520  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.352527  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.352532  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.352592  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.381218  947325 cri.go:89] found id: ""
	I1213 10:38:40.381233  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.381240  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.381245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.381303  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.406747  947325 cri.go:89] found id: ""
	I1213 10:38:40.406761  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.406769  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.406774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.406836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.432179  947325 cri.go:89] found id: ""
	I1213 10:38:40.432193  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.432200  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.432230  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.432294  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.457241  947325 cri.go:89] found id: ""
	I1213 10:38:40.457256  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.457263  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.457270  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.457281  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:40.485384  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.485400  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.553931  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.553950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.568552  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.568568  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.631691  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.623997   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.624643   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626097   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626582   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.628021   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:40.623997   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.624643   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626097   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626582   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.628021   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:40.631701  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:40.631711  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.202405  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.212618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.212681  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.237960  947325 cri.go:89] found id: ""
	I1213 10:38:43.237975  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.237981  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.237986  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.238046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.262400  947325 cri.go:89] found id: ""
	I1213 10:38:43.262415  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.262422  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.262427  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.262485  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.287113  947325 cri.go:89] found id: ""
	I1213 10:38:43.287126  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.287133  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.287138  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.287194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.311437  947325 cri.go:89] found id: ""
	I1213 10:38:43.311451  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.311459  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.311464  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.311520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.338038  947325 cri.go:89] found id: ""
	I1213 10:38:43.338052  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.338059  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.338066  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.338125  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.363248  947325 cri.go:89] found id: ""
	I1213 10:38:43.363262  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.363269  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.363274  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.363331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.388331  947325 cri.go:89] found id: ""
	I1213 10:38:43.388346  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.388353  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.388361  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.388371  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.456040  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.448208   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.448885   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.450561   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.451211   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.452293   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:43.448208   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.448885   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.450561   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.451211   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.452293   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:43.456051  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:43.456062  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.529676  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.529697  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:43.557667  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.557683  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.626256  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.626276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
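The block above is one full pass of minikube's apiserver wait loop: it pgreps for the apiserver process, asks the CRI runtime for each control-plane container by name, and then gathers logs. A minimal sketch for rerunning the same probe by hand on the node (each command is copied verbatim from the log; the for-loop is only a condensation of the seven per-component probes):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$name"    # empty output = no such container found
    done
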
	I1213 10:38:46.141151  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.151629  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.151691  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.177078  947325 cri.go:89] found id: ""
	I1213 10:38:46.177092  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.177099  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.177104  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.177163  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.203681  947325 cri.go:89] found id: ""
	I1213 10:38:46.203695  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.203702  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.203707  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.203765  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.228801  947325 cri.go:89] found id: ""
	I1213 10:38:46.228815  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.228823  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.228828  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.228892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.254742  947325 cri.go:89] found id: ""
	I1213 10:38:46.254756  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.254763  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.254768  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.254825  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.286504  947325 cri.go:89] found id: ""
	I1213 10:38:46.286522  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.286529  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.286534  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.286596  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.311507  947325 cri.go:89] found id: ""
	I1213 10:38:46.311523  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.311531  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.311536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.311599  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.340455  947325 cri.go:89] found id: ""
	I1213 10:38:46.340469  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.340477  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.340496  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.340508  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.410798  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.410817  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.425740  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.425758  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.488528  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.479589   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.480382   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482285   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482891   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.484595   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:46.479589   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.480382   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482285   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482891   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.484595   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:46.488537  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:46.488549  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:46.558649  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.558668  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
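Each pass ends by collecting the same log sources. The underlying commands, exactly as run over SSH in the log above, are:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Only the last command fails in this run, since it is the only one that needs the apiserver on localhost:8441.
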
	I1213 10:38:49.089125  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:49.099199  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:49.099261  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.128242  947325 cri.go:89] found id: ""
	I1213 10:38:49.128256  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.128263  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.128268  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.128328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.154103  947325 cri.go:89] found id: ""
	I1213 10:38:49.154117  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.154124  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.154129  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.154189  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.178738  947325 cri.go:89] found id: ""
	I1213 10:38:49.178754  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.178762  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.178767  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.178824  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.203209  947325 cri.go:89] found id: ""
	I1213 10:38:49.203223  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.203230  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.203235  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.203290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.228158  947325 cri.go:89] found id: ""
	I1213 10:38:49.228174  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.228181  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.228186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.228245  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.257410  947325 cri.go:89] found id: ""
	I1213 10:38:49.257425  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.257432  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.257437  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.257503  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.284405  947325 cri.go:89] found id: ""
	I1213 10:38:49.284419  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.284428  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.284436  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.284447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.350814  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.350834  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:49.365897  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.365914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.428434  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.419689   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.420411   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422194   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422785   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.424440   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:49.419689   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.420411   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422194   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422785   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.424440   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:49.428445  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:49.428455  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:49.497319  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.497338  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.026790  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:52.037493  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:52.037629  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:52.065928  947325 cri.go:89] found id: ""
	I1213 10:38:52.065942  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.065959  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:52.065966  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:52.066030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:52.093348  947325 cri.go:89] found id: ""
	I1213 10:38:52.093377  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.093385  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:52.093391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:52.093461  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.120408  947325 cri.go:89] found id: ""
	I1213 10:38:52.120438  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.120446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.120451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.120520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.151619  947325 cri.go:89] found id: ""
	I1213 10:38:52.151633  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.151640  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.151645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.151709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.181293  947325 cri.go:89] found id: ""
	I1213 10:38:52.181307  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.181314  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.181319  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.181381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.207056  947325 cri.go:89] found id: ""
	I1213 10:38:52.207073  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.207080  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.207085  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.207144  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.232482  947325 cri.go:89] found id: ""
	I1213 10:38:52.232495  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.232503  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.232511  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.232523  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:52.298884  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.298908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.314165  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.314184  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.379432  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.370728   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.371164   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.372944   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.373396   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.375062   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:52.370728   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.371164   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.372944   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.373396   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.375062   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:52.379442  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:52.379454  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:52.447720  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.447739  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:54.981781  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.994265  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.994331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:55.034512  947325 cri.go:89] found id: ""
	I1213 10:38:55.034527  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.034535  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:55.034541  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:55.034603  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:55.064371  947325 cri.go:89] found id: ""
	I1213 10:38:55.064385  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.064393  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:55.064399  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:55.064464  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:55.094614  947325 cri.go:89] found id: ""
	I1213 10:38:55.094628  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.094635  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:55.094640  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:55.094703  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:55.122445  947325 cri.go:89] found id: ""
	I1213 10:38:55.122469  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.122476  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:55.122482  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:55.122565  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:55.149483  947325 cri.go:89] found id: ""
	I1213 10:38:55.149497  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.149505  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:55.149510  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:55.149608  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.177190  947325 cri.go:89] found id: ""
	I1213 10:38:55.177204  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.177211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.177216  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.177276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.205792  947325 cri.go:89] found id: ""
	I1213 10:38:55.205805  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.205813  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.205820  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:55.205831  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:55.274521  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.274543  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.303850  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.303867  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.372053  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.372072  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.386741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.386757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.453760  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.443866   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.444485   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.446205   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448348   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448876   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:55.443866   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.444485   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.446205   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448348   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448876   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
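Every describe-nodes attempt in this loop fails the same way: nothing is listening on port 8441. A quick hedged check from the node (curl is an assumption here; the test itself only surfaces the refusal through kubectl):

    # Expect "connection refused" while the apiserver is down
    curl -ksS 'https://localhost:8441/api?timeout=32s'
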
	I1213 10:38:57.954020  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.964050  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.964109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.998468  947325 cri.go:89] found id: ""
	I1213 10:38:57.998484  947325 logs.go:282] 0 containers: []
	W1213 10:38:57.998492  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.998497  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.998564  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:58.035565  947325 cri.go:89] found id: ""
	I1213 10:38:58.035580  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.035587  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:58.035592  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:58.035654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:58.066882  947325 cri.go:89] found id: ""
	I1213 10:38:58.066903  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.066912  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:58.066917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:58.066978  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:58.092977  947325 cri.go:89] found id: ""
	I1213 10:38:58.093007  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.093014  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:58.093019  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:58.093088  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:58.123222  947325 cri.go:89] found id: ""
	I1213 10:38:58.123235  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.123243  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:58.123248  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:58.123311  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:58.148191  947325 cri.go:89] found id: ""
	I1213 10:38:58.148204  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.148211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:58.148226  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:58.148283  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.174245  947325 cri.go:89] found id: ""
	I1213 10:38:58.174259  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.174266  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:58.174274  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.174286  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.238353  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.230226   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.230884   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232487   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232939   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.234404   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:58.230226   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.230884   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232487   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232939   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.234404   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:58.238363  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:58.238374  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:58.310390  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.310414  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:58.339218  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.339235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.411033  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.411053  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:00.926322  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.937217  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.937279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.963631  947325 cri.go:89] found id: ""
	I1213 10:39:00.963645  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.963653  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.963658  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.963720  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.992312  947325 cri.go:89] found id: ""
	I1213 10:39:00.992327  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.992334  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.992340  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.992402  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:01.019653  947325 cri.go:89] found id: ""
	I1213 10:39:01.019667  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.019674  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:01.019679  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:01.019737  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:01.048197  947325 cri.go:89] found id: ""
	I1213 10:39:01.048211  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.048218  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:01.048224  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:01.048278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:01.077274  947325 cri.go:89] found id: ""
	I1213 10:39:01.077288  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.077296  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:01.077301  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:01.077359  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:01.102210  947325 cri.go:89] found id: ""
	I1213 10:39:01.102225  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.102232  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:01.102237  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:01.102296  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:01.127343  947325 cri.go:89] found id: ""
	I1213 10:39:01.127357  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.127364  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:01.127372  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:01.127384  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:01.193045  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:01.184559   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.185631   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.186444   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187426   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187971   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:01.184559   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.185631   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.186444   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187426   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187971   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:01.193056  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:01.193066  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:01.263652  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.263672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.300661  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.300679  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.369051  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.369070  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.885575  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.895834  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.895898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.926313  947325 cri.go:89] found id: ""
	I1213 10:39:03.926327  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.926335  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.926339  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.926396  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.954240  947325 cri.go:89] found id: ""
	I1213 10:39:03.954254  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.954261  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.954266  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.954324  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.984134  947325 cri.go:89] found id: ""
	I1213 10:39:03.984148  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.984154  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.984159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.984224  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:04.016879  947325 cri.go:89] found id: ""
	I1213 10:39:04.016894  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.016901  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:04.016906  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:04.016965  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:04.041176  947325 cri.go:89] found id: ""
	I1213 10:39:04.041190  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.041203  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:04.041208  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:04.041267  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:04.066331  947325 cri.go:89] found id: ""
	I1213 10:39:04.066345  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.066351  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:04.066357  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:04.066415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:04.090857  947325 cri.go:89] found id: ""
	I1213 10:39:04.090886  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.090895  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:04.090903  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:04.090917  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:04.156570  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:04.156590  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:04.171387  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:04.171404  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.240263  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:04.240273  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:04.240285  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:04.319651  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.319672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.852882  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.864121  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.864186  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.890733  947325 cri.go:89] found id: ""
	I1213 10:39:06.890748  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.890756  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.890761  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.890819  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.917207  947325 cri.go:89] found id: ""
	I1213 10:39:06.917222  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.917228  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.917234  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.917291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.943186  947325 cri.go:89] found id: ""
	I1213 10:39:06.943201  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.943208  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.943213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.943278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.973557  947325 cri.go:89] found id: ""
	I1213 10:39:06.973571  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.973579  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.973584  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.973641  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:07.004748  947325 cri.go:89] found id: ""
	I1213 10:39:07.004770  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.004778  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:07.004783  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:07.004851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:07.030997  947325 cri.go:89] found id: ""
	I1213 10:39:07.031011  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.031019  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:07.031024  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:07.031080  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:07.055983  947325 cri.go:89] found id: ""
	I1213 10:39:07.055997  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.056004  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:07.056012  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:07.056024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:07.084902  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:07.084919  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:07.153213  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:07.153232  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:07.168429  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:07.168446  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.232563  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:07.232586  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:07.232598  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
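
Each retry above has the same two-level shape: a process check (pgrep -xnf "kube-apiserver.*minikube.*") followed by a per-component CRI query. A minimal sketch of that probe, assuming it runs on the node itself (for example via "minikube ssh") with crictl on the PATH; the component list and crictl flags are copied from the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # same flags the loop uses
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

Every probe in this run comes back empty ("found id: \"\""), so each cycle falls through to log gathering.
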
	I1213 10:39:09.804561  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.814452  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.814514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.840080  947325 cri.go:89] found id: ""
	I1213 10:39:09.840093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.840101  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.840106  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.840170  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.864603  947325 cri.go:89] found id: ""
	I1213 10:39:09.864617  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.864625  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.864630  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.864697  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.889079  947325 cri.go:89] found id: ""
	I1213 10:39:09.889093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.889101  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.889106  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.889162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.915869  947325 cri.go:89] found id: ""
	I1213 10:39:09.915883  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.915890  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.915895  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.915954  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.945590  947325 cri.go:89] found id: ""
	I1213 10:39:09.945603  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.945610  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.945618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.945678  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.971712  947325 cri.go:89] found id: ""
	I1213 10:39:09.971725  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.971732  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.971737  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.971798  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:10.003581  947325 cri.go:89] found id: ""
	I1213 10:39:10.003600  947325 logs.go:282] 0 containers: []
	W1213 10:39:10.003608  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:10.003618  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:10.003633  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:10.077821  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:10.077842  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:10.108375  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:10.108392  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:10.178400  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:10.178420  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:10.193608  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:10.193647  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:10.270772  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
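
Every "describe nodes" attempt in this run fails identically: the client cannot reach https://localhost:8441, so it never gets past fetching the API group list. A hedged triage sketch, run on the node, to confirm nothing is listening on that port (8441 is taken from the errors above) and to peek at kubelet, which would normally be restarting the static apiserver pod:

    sudo ss -ltn | grep -w 8441 || echo "nothing listening on 8441"
    curl -sk --max-time 2 https://localhost:8441/healthz || echo "healthz unreachable"
    sudo journalctl -u kubelet -n 50 --no-pager
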
	I1213 10:39:12.771904  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.782049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.782110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.806673  947325 cri.go:89] found id: ""
	I1213 10:39:12.806687  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.806695  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.806700  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.806757  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.835814  947325 cri.go:89] found id: ""
	I1213 10:39:12.835829  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.835836  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.835841  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.835898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.861712  947325 cri.go:89] found id: ""
	I1213 10:39:12.861727  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.861734  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.861740  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.861804  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.886652  947325 cri.go:89] found id: ""
	I1213 10:39:12.886666  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.886673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.886678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.886736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.916010  947325 cri.go:89] found id: ""
	I1213 10:39:12.916025  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.916032  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.916037  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.916100  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.946655  947325 cri.go:89] found id: ""
	I1213 10:39:12.946672  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.946679  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.946684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.946748  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.976684  947325 cri.go:89] found id: ""
	I1213 10:39:12.976698  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.976705  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.976713  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.976726  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:13.043449  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:13.043472  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:13.059281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:13.059299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:13.122969  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:13.122981  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:13.122991  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:13.193301  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:13.193322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
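
When no component container is found, the loop falls back to collecting the same five log sources; only their order varies between cycles. Reproduced from the commands above (all run on the node):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"

The last command tries a located crictl first and falls back to plain crictl, then to docker; describe nodes keeps failing for as long as the apiserver port stays closed.
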
	I1213 10:39:15.728135  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.739049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.739110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.764321  947325 cri.go:89] found id: ""
	I1213 10:39:15.764335  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.764342  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.764348  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.764410  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.794053  947325 cri.go:89] found id: ""
	I1213 10:39:15.794068  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.794077  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.794083  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.794138  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.819708  947325 cri.go:89] found id: ""
	I1213 10:39:15.819721  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.819729  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.819734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.819793  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.850534  947325 cri.go:89] found id: ""
	I1213 10:39:15.850548  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.850556  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.850561  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.850618  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.879609  947325 cri.go:89] found id: ""
	I1213 10:39:15.879623  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.879631  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.879636  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.879700  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.908873  947325 cri.go:89] found id: ""
	I1213 10:39:15.908887  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.908895  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.908901  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.908967  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.936537  947325 cri.go:89] found id: ""
	I1213 10:39:15.936552  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.936559  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.936567  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:15.936580  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:16.005668  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:16.005690  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:16.036804  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:16.036822  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:16.105762  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:16.105780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:16.121830  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:16.121849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:16.189324  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
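
The timestamps show the whole probe repeating on a roughly three-second cadence. A hypothetical equivalent wait loop; the overall deadline is an assumption, since the real timeout is not visible in this excerpt:

    # Assumed 4-minute deadline; exits 0 once an apiserver container appears.
    deadline=$((SECONDS + 240))
    until [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
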
	I1213 10:39:18.689610  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.699729  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.699788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.725083  947325 cri.go:89] found id: ""
	I1213 10:39:18.725097  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.725105  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.725110  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.725165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.751300  947325 cri.go:89] found id: ""
	I1213 10:39:18.751315  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.751327  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.751333  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.751390  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.776458  947325 cri.go:89] found id: ""
	I1213 10:39:18.776473  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.776480  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.776485  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.776543  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.801403  947325 cri.go:89] found id: ""
	I1213 10:39:18.801416  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.801423  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.801428  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.801488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.828035  947325 cri.go:89] found id: ""
	I1213 10:39:18.828053  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.828060  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.828065  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.828122  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.852563  947325 cri.go:89] found id: ""
	I1213 10:39:18.852577  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.852583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.852589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.852647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.879882  947325 cri.go:89] found id: ""
	I1213 10:39:18.879897  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.879904  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.879912  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.879922  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.913762  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.913788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.978817  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.978840  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.994917  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.994936  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:19.062190  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:19.062201  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:19.062213  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.629331  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.639522  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.639593  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.664074  947325 cri.go:89] found id: ""
	I1213 10:39:21.664089  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.664097  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.664102  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.664164  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.689123  947325 cri.go:89] found id: ""
	I1213 10:39:21.689136  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.689144  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.689149  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.689206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.713736  947325 cri.go:89] found id: ""
	I1213 10:39:21.713750  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.713758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.713762  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.713817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.741978  947325 cri.go:89] found id: ""
	I1213 10:39:21.741991  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.741999  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.742004  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.742063  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.767443  947325 cri.go:89] found id: ""
	I1213 10:39:21.767458  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.767464  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.767469  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.767526  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.792419  947325 cri.go:89] found id: ""
	I1213 10:39:21.792434  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.792457  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.792463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.792529  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.821837  947325 cri.go:89] found id: ""
	I1213 10:39:21.821851  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.821859  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.821867  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.821878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.836299  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.836315  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.902625  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:21.902635  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:21.902646  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.971184  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.971204  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:22.003828  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:22.003847  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.576083  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.587706  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.587784  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.613620  947325 cri.go:89] found id: ""
	I1213 10:39:24.613635  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.613643  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.613648  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.613706  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.639792  947325 cri.go:89] found id: ""
	I1213 10:39:24.639807  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.639814  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.639820  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.639897  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.664551  947325 cri.go:89] found id: ""
	I1213 10:39:24.664566  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.664573  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.664578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.664638  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.689748  947325 cri.go:89] found id: ""
	I1213 10:39:24.689762  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.689769  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.689774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.689831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.718617  947325 cri.go:89] found id: ""
	I1213 10:39:24.718632  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.718639  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.718645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.718702  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.748026  947325 cri.go:89] found id: ""
	I1213 10:39:24.748040  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.748047  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.748052  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.748117  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.774049  947325 cri.go:89] found id: ""
	I1213 10:39:24.774063  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.774070  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.774084  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.774095  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.840008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.840029  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.855570  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.855587  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.924254  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:24.924266  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:24.924276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:24.993620  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.993639  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:27.529665  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:27.539536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:27.539597  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:27.564505  947325 cri.go:89] found id: ""
	I1213 10:39:27.564519  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.564526  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:27.564531  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:27.564591  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:27.590383  947325 cri.go:89] found id: ""
	I1213 10:39:27.590397  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.590405  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:27.590410  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:27.590474  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:27.615895  947325 cri.go:89] found id: ""
	I1213 10:39:27.615909  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.615916  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:27.615921  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:27.615979  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:27.647656  947325 cri.go:89] found id: ""
	I1213 10:39:27.647670  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.647678  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:27.647683  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:27.647741  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:27.673365  947325 cri.go:89] found id: ""
	I1213 10:39:27.673379  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.673385  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:27.673390  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:27.673448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:27.698006  947325 cri.go:89] found id: ""
	I1213 10:39:27.698020  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.698028  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:27.698033  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:27.698096  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:27.722664  947325 cri.go:89] found id: ""
	I1213 10:39:27.722688  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.722695  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:27.722702  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:27.722713  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:27.793605  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:27.793629  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:27.808404  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:27.808420  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:27.875877  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:27.866856   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.867426   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869149   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869660   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.871392   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:27.866856   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.867426   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869149   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869660   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.871392   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:27.875886  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:27.875898  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:27.944703  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:27.944723  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:30.475788  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:30.486929  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:30.486993  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:30.523816  947325 cri.go:89] found id: ""
	I1213 10:39:30.523830  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.523837  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:30.523843  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:30.523899  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:30.556573  947325 cri.go:89] found id: ""
	I1213 10:39:30.556586  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.556593  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:30.556598  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:30.556666  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:30.581886  947325 cri.go:89] found id: ""
	I1213 10:39:30.581900  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.581907  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:30.581912  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:30.581972  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:30.611853  947325 cri.go:89] found id: ""
	I1213 10:39:30.611878  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.611886  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:30.611891  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:30.611959  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:30.636125  947325 cri.go:89] found id: ""
	I1213 10:39:30.636140  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.636147  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:30.636152  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:30.636213  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:30.661404  947325 cri.go:89] found id: ""
	I1213 10:39:30.661418  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.661425  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:30.661430  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:30.661490  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:30.686369  947325 cri.go:89] found id: ""
	I1213 10:39:30.686382  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.686390  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:30.686397  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:30.686408  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:30.752100  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:30.752120  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:30.766471  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:30.766487  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:30.831347  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:30.823244   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.823892   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.825523   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.826100   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.827547   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:30.831356  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:30.831367  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:30.899699  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:30.899718  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.428636  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:33.438752  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:33.438815  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:33.467201  947325 cri.go:89] found id: ""
	I1213 10:39:33.467215  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.467222  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:33.467227  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:33.467285  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:33.496554  947325 cri.go:89] found id: ""
	I1213 10:39:33.496570  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.496577  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:33.496582  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:33.496650  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:33.523431  947325 cri.go:89] found id: ""
	I1213 10:39:33.523446  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.523453  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:33.523457  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:33.523517  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:33.559332  947325 cri.go:89] found id: ""
	I1213 10:39:33.559346  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.559353  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:33.559358  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:33.559413  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:33.587632  947325 cri.go:89] found id: ""
	I1213 10:39:33.587645  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.587653  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:33.587658  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:33.587714  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:33.612223  947325 cri.go:89] found id: ""
	I1213 10:39:33.612237  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.612266  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:33.612271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:33.612339  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:33.636321  947325 cri.go:89] found id: ""
	I1213 10:39:33.636344  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.636351  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:33.636359  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:33.636373  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:33.650977  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:33.650993  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:33.710121  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:33.702522   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.703286   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.704577   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.705062   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.706497   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:33.710132  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:33.710143  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:33.781081  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:33.781101  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.810866  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:33.810882  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.380753  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:36.390598  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:36.390659  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:36.416068  947325 cri.go:89] found id: ""
	I1213 10:39:36.416083  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.416090  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:36.416097  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:36.416156  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:36.443939  947325 cri.go:89] found id: ""
	I1213 10:39:36.443954  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.443968  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:36.443973  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:36.444031  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:36.468690  947325 cri.go:89] found id: ""
	I1213 10:39:36.468704  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.468711  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:36.468716  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:36.468772  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:36.498941  947325 cri.go:89] found id: ""
	I1213 10:39:36.498955  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.498962  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:36.498967  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:36.499033  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:36.538080  947325 cri.go:89] found id: ""
	I1213 10:39:36.538103  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.538111  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:36.538116  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:36.538179  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:36.563132  947325 cri.go:89] found id: ""
	I1213 10:39:36.563147  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.563154  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:36.563160  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:36.563217  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:36.588756  947325 cri.go:89] found id: ""
	I1213 10:39:36.588780  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.588789  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:36.588797  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:36.588812  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.653330  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:36.653350  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:36.670404  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:36.670421  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:36.742327  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:36.732861   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.733828   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.734900   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.736542   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.737273   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:36.742339  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:36.742350  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:36.811143  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:36.811163  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:39.339643  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:39.349836  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:39.349897  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:39.375161  947325 cri.go:89] found id: ""
	I1213 10:39:39.375175  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.375194  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:39.375200  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:39.375262  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:39.400364  947325 cri.go:89] found id: ""
	I1213 10:39:39.400393  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.400402  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:39.400407  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:39.400473  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:39.427167  947325 cri.go:89] found id: ""
	I1213 10:39:39.427182  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.427189  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:39.427195  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:39.427270  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:39.455933  947325 cri.go:89] found id: ""
	I1213 10:39:39.455960  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.455967  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:39.455973  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:39.456041  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:39.489827  947325 cri.go:89] found id: ""
	I1213 10:39:39.489840  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.489847  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:39.489852  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:39.489920  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:39.529776  947325 cri.go:89] found id: ""
	I1213 10:39:39.529790  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.529797  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:39.529814  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:39.529890  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:39.558531  947325 cri.go:89] found id: ""
	I1213 10:39:39.558545  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.558552  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:39.558560  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:39.558571  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:39.625366  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:39.625384  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:39.640509  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:39.640525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:39.706928  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:39.697596   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.698528   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700141   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700637   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.702492   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:39.706940  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:39.706952  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:39.779211  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:39.779231  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.308782  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:42.319639  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:42.319702  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:42.346935  947325 cri.go:89] found id: ""
	I1213 10:39:42.346959  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.346970  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:42.346977  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:42.347038  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:42.378296  947325 cri.go:89] found id: ""
	I1213 10:39:42.378310  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.378316  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:42.378321  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:42.378381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:42.403824  947325 cri.go:89] found id: ""
	I1213 10:39:42.403839  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.403845  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:42.403850  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:42.403919  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:42.429874  947325 cri.go:89] found id: ""
	I1213 10:39:42.429890  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.429898  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:42.429905  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:42.429978  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:42.457189  947325 cri.go:89] found id: ""
	I1213 10:39:42.457203  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.457211  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:42.457216  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:42.457277  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:42.485373  947325 cri.go:89] found id: ""
	I1213 10:39:42.485389  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.485400  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:42.485429  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:42.485500  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:42.518706  947325 cri.go:89] found id: ""
	I1213 10:39:42.518720  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.518728  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:42.518735  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:42.518746  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:42.534645  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:42.534662  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:42.606481  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:42.598265   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.599171   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.600937   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.601258   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.602752   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:42.606491  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:42.606501  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:42.673511  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:42.673532  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.702426  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:42.702443  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.267475  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:45.280530  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:45.280751  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:45.313332  947325 cri.go:89] found id: ""
	I1213 10:39:45.313346  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.313354  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:45.313359  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:45.313427  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:45.342213  947325 cri.go:89] found id: ""
	I1213 10:39:45.342227  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.342234  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:45.342239  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:45.342297  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:45.371108  947325 cri.go:89] found id: ""
	I1213 10:39:45.371123  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.371130  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:45.371137  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:45.371197  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:45.400706  947325 cri.go:89] found id: ""
	I1213 10:39:45.400720  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.400728  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:45.400735  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:45.400805  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:45.428233  947325 cri.go:89] found id: ""
	I1213 10:39:45.428258  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.428266  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:45.428271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:45.428341  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:45.458995  947325 cri.go:89] found id: ""
	I1213 10:39:45.459010  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.459017  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:45.459023  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:45.459081  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:45.494206  947325 cri.go:89] found id: ""
	I1213 10:39:45.494220  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.494227  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:45.494235  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:45.494246  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:45.575280  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:45.575299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:45.605803  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:45.605820  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.676085  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:45.676104  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:45.691072  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:45.691091  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:45.756808  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:45.747515   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.748188   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.750879   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.751438   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.752940   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:48.257078  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:48.266893  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:48.266954  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:48.292251  947325 cri.go:89] found id: ""
	I1213 10:39:48.292265  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.292272  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:48.292288  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:48.292345  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:48.318109  947325 cri.go:89] found id: ""
	I1213 10:39:48.318134  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.318142  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:48.318147  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:48.318207  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:48.344874  947325 cri.go:89] found id: ""
	I1213 10:39:48.344888  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.344896  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:48.344901  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:48.344966  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:48.372878  947325 cri.go:89] found id: ""
	I1213 10:39:48.372893  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.372900  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:48.372906  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:48.372967  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:48.399505  947325 cri.go:89] found id: ""
	I1213 10:39:48.399517  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.399525  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:48.399530  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:48.399591  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:48.426096  947325 cri.go:89] found id: ""
	I1213 10:39:48.426110  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.426117  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:48.426123  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:48.426182  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:48.452372  947325 cri.go:89] found id: ""
	I1213 10:39:48.452387  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.452394  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:48.452402  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:48.452413  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:48.535530  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:48.535558  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:48.565498  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:48.565516  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:48.638609  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:48.638630  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:48.653725  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:48.653743  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:48.724088  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:48.715285   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.715997   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.717752   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.718374   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.719911   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:51.224632  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:51.234995  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:51.235060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:51.260920  947325 cri.go:89] found id: ""
	I1213 10:39:51.260934  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.260941  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:51.260946  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:51.261010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:51.288308  947325 cri.go:89] found id: ""
	I1213 10:39:51.288323  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.288330  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:51.288335  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:51.288395  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:51.313237  947325 cri.go:89] found id: ""
	I1213 10:39:51.313251  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.313258  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:51.313263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:51.313322  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:51.340832  947325 cri.go:89] found id: ""
	I1213 10:39:51.340845  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.340852  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:51.340857  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:51.340913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:51.367975  947325 cri.go:89] found id: ""
	I1213 10:39:51.367989  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.367996  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:51.368000  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:51.368059  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:51.393715  947325 cri.go:89] found id: ""
	I1213 10:39:51.393728  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.393736  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:51.393741  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:51.393803  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:51.422317  947325 cri.go:89] found id: ""
	I1213 10:39:51.422331  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.422338  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:51.422345  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:51.422356  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:51.492559  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:51.492577  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:51.531769  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:51.531786  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:51.599294  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:51.599316  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:51.615318  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:51.615334  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:51.678990  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:51.669927   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.670629   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672315   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672978   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.674480   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:54.180647  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:54.190751  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:54.190817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:54.216105  947325 cri.go:89] found id: ""
	I1213 10:39:54.216119  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.216126  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:54.216131  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:54.216188  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:54.245934  947325 cri.go:89] found id: ""
	I1213 10:39:54.245948  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.245955  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:54.245960  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:54.246019  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:54.272786  947325 cri.go:89] found id: ""
	I1213 10:39:54.272800  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.272807  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:54.272812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:54.272871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:54.298724  947325 cri.go:89] found id: ""
	I1213 10:39:54.298738  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.298745  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:54.298750  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:54.298814  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:54.324500  947325 cri.go:89] found id: ""
	I1213 10:39:54.324514  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.324522  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:54.324533  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:54.324647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:54.351350  947325 cri.go:89] found id: ""
	I1213 10:39:54.351364  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.351372  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:54.351377  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:54.351439  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:54.376698  947325 cri.go:89] found id: ""
	I1213 10:39:54.376712  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.376720  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:54.376729  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:54.376740  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:54.408737  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:54.408753  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:54.475785  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:54.475805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:54.498578  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:54.498595  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:54.571508  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:54.562841   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.563554   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565208   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565888   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.567536   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:54.571518  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:54.571529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:57.141570  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:57.151660  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:57.151725  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:57.177208  947325 cri.go:89] found id: ""
	I1213 10:39:57.177222  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.177230  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:57.177235  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:57.177305  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:57.202689  947325 cri.go:89] found id: ""
	I1213 10:39:57.202703  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.202710  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:57.202715  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:57.202778  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:57.227567  947325 cri.go:89] found id: ""
	I1213 10:39:57.227581  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.227588  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:57.227593  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:57.227651  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:57.257034  947325 cri.go:89] found id: ""
	I1213 10:39:57.257048  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.257056  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:57.257061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:57.257118  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:57.282238  947325 cri.go:89] found id: ""
	I1213 10:39:57.282251  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.282258  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:57.282263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:57.282321  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:57.308327  947325 cri.go:89] found id: ""
	I1213 10:39:57.308341  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.308348  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:57.308353  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:57.308412  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:57.334174  947325 cri.go:89] found id: ""
	I1213 10:39:57.334188  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.334196  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:57.334203  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:57.334214  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:57.365982  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:57.365997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:57.438986  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:57.439007  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:57.454096  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:57.454113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:57.539317  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:57.526904   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.527801   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.529755   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.530529   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.532282   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:57.539330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:57.539341  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:00.111211  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:00.161991  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:00.162066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:00.278257  947325 cri.go:89] found id: ""
	I1213 10:40:00.278273  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.278282  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:00.278288  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:00.278371  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:00.356421  947325 cri.go:89] found id: ""
	I1213 10:40:00.356441  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.356449  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:00.356459  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:00.356542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:00.426855  947325 cri.go:89] found id: ""
	I1213 10:40:00.426872  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.426880  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:00.426887  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:00.426962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:00.484844  947325 cri.go:89] found id: ""
	I1213 10:40:00.484860  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.484868  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:00.484874  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:00.484945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:00.602425  947325 cri.go:89] found id: ""
	I1213 10:40:00.602444  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.602452  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:00.602465  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:00.602545  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:00.682272  947325 cri.go:89] found id: ""
	I1213 10:40:00.682288  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.682297  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:00.682303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:00.682377  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:00.717455  947325 cri.go:89] found id: ""
	I1213 10:40:00.717470  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.717478  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:00.717486  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:00.717498  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:00.751785  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:00.751805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:00.823234  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:00.823256  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:00.840067  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:00.840092  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:00.911938  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:00.902907   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.903639   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905343   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905895   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.907562   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:00.911995  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:00.912005  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.480277  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:03.490777  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:03.490839  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:03.516535  947325 cri.go:89] found id: ""
	I1213 10:40:03.516549  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.516556  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:03.516561  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:03.516630  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:03.543061  947325 cri.go:89] found id: ""
	I1213 10:40:03.543075  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.543083  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:03.543088  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:03.543149  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:03.569136  947325 cri.go:89] found id: ""
	I1213 10:40:03.569150  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.569158  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:03.569163  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:03.569222  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:03.596417  947325 cri.go:89] found id: ""
	I1213 10:40:03.596431  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.596438  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:03.596443  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:03.596510  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:03.624475  947325 cri.go:89] found id: ""
	I1213 10:40:03.624489  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.624496  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:03.624501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:03.624560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:03.650480  947325 cri.go:89] found id: ""
	I1213 10:40:03.650495  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.650509  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:03.650515  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:03.650574  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:03.679244  947325 cri.go:89] found id: ""
	I1213 10:40:03.679258  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.679265  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:03.679272  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:03.679283  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:03.752004  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:03.742428   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.743353   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.744776   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.745390   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.747857   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:03.752014  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:03.752025  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.833866  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:03.833888  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:03.863364  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:03.863381  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:03.930202  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:03.930230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
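Note what these cycles establish: the runtime layer is healthy (journalctl -u crio returns promptly and crictl answers every query), yet it reports zero containers for every control-plane component. The failure is therefore above CRI-O: the kubelet never created the static pods. Two quick checks that separate the layers (a sketch, using only tools already shown in this log):

    # CRI-O itself is up and logging
    sudo journalctl -u crio -n 50 --no-pager
    # no pod sandboxes at all => the kubelet never started any static pods
    sudo crictl pods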
	I1213 10:40:06.446850  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:06.456936  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:06.457005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:06.481624  947325 cri.go:89] found id: ""
	I1213 10:40:06.481638  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.481645  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:06.481653  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:06.481709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:06.510312  947325 cri.go:89] found id: ""
	I1213 10:40:06.510335  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.510342  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:06.510347  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:06.510408  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:06.541422  947325 cri.go:89] found id: ""
	I1213 10:40:06.541439  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.541446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:06.541451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:06.541511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:06.567745  947325 cri.go:89] found id: ""
	I1213 10:40:06.567759  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.567766  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:06.567771  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:06.567827  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:06.593070  947325 cri.go:89] found id: ""
	I1213 10:40:06.593085  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.593092  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:06.593097  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:06.593159  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:06.620092  947325 cri.go:89] found id: ""
	I1213 10:40:06.620106  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.620114  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:06.620119  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:06.620180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:06.646655  947325 cri.go:89] found id: ""
	I1213 10:40:06.646668  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.646676  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:06.646684  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:06.646695  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:06.713111  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:06.713133  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.729687  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:06.729703  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:06.811226  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:06.802029   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.803349   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.804038   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.805655   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.806271   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:06.811237  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:06.811252  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:06.879267  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:06.879290  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.408425  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:09.418903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:09.418973  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:09.445864  947325 cri.go:89] found id: ""
	I1213 10:40:09.445878  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.445886  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:09.445891  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:09.445953  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:09.477028  947325 cri.go:89] found id: ""
	I1213 10:40:09.477042  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.477049  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:09.477054  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:09.477114  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:09.503739  947325 cri.go:89] found id: ""
	I1213 10:40:09.503754  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.503761  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:09.503766  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:09.503830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:09.530433  947325 cri.go:89] found id: ""
	I1213 10:40:09.530449  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.530458  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:09.530463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:09.530527  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:09.557391  947325 cri.go:89] found id: ""
	I1213 10:40:09.557406  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.557413  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:09.557424  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:09.557488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:09.583991  947325 cri.go:89] found id: ""
	I1213 10:40:09.584006  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.584014  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:09.584020  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:09.584084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:09.610671  947325 cri.go:89] found id: ""
	I1213 10:40:09.610685  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.610692  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:09.610701  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.610712  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:09.626022  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.626039  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.693054  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.684419   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.685112   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.686796   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.687319   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.689067   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:09.693064  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:09.693077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:09.767666  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.767694  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.799935  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:09.799953  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.366822  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:12.377676  947325 kubeadm.go:602] duration metric: took 4m2.920144703s to restartPrimaryControlPlane
	W1213 10:40:12.377740  947325 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
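After roughly four minutes of the polling above, minikube abandons restartPrimaryControlPlane and falls back to wiping the kubeadm state and re-initializing from a fresh config. Condensed, the fallback is (paths exactly as in the commands that follow):

    # reset any existing kubeadm state, then re-run init from a new config
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml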
	I1213 10:40:12.377825  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:40:12.791103  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:40:12.803671  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:40:12.811334  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:40:12.811389  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:40:12.818912  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:40:12.818922  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:40:12.818976  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:40:12.826986  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:40:12.827043  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:40:12.834424  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:40:12.842053  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:40:12.842110  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:40:12.849745  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.857650  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:40:12.857707  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.865223  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:40:12.873255  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:40:12.873315  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
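The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8441, and is otherwise removed so that kubeadm init can rewrite it. Here none of the files exist (grep exits with status 2), so every rm -f is a no-op. The same logic as a loop (a sketch equivalent to the per-file commands above):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done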
	I1213 10:40:12.881016  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:40:12.922045  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:40:12.922134  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:40:13.007876  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:40:13.007942  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:40:13.007977  947325 kubeadm.go:319] OS: Linux
	I1213 10:40:13.008021  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:40:13.008068  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:40:13.008115  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:40:13.008162  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:40:13.008210  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:40:13.008257  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:40:13.008305  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:40:13.008352  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:40:13.008397  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:40:13.081346  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:40:13.081472  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:40:13.081605  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:40:13.089963  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:40:13.093587  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:40:13.093699  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:40:13.093775  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:40:13.093883  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:40:13.093964  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:40:13.094047  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:40:13.094113  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:40:13.094188  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:40:13.094255  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:40:13.094334  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:40:13.094412  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:40:13.094451  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:40:13.094511  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:40:13.317953  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:40:13.628016  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:40:13.956341  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:40:14.391056  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:40:14.663244  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:40:14.663900  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:40:14.666642  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:40:14.670022  947325 out.go:252]   - Booting up control plane ...
	I1213 10:40:14.670125  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:40:14.670202  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:40:14.670267  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:40:14.685196  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:40:14.685574  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:40:14.692785  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:40:14.693070  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:40:14.693112  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:40:14.837275  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:40:14.837410  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:44:14.836045  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00023703s
	I1213 10:44:14.836071  947325 kubeadm.go:319] 
	I1213 10:44:14.836328  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:44:14.836386  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:44:14.836565  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:44:14.836573  947325 kubeadm.go:319] 
	I1213 10:44:14.836751  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:44:14.837048  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:44:14.837101  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:44:14.837105  947325 kubeadm.go:319] 
	I1213 10:44:14.841975  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:44:14.842445  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:44:14.842565  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:44:14.842818  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:44:14.842823  947325 kubeadm.go:319] 
	I1213 10:44:14.842900  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
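Both SystemVerification warnings above matter here. The runner is on a cgroups v1 kernel (5.15.0-1084-aws), and per the warning, kubelet v1.35 or newer only runs on cgroups v1 if the KubeletConfiguration option FailCgroupV1 is explicitly set to false and the validation is skipped. If that is what keeps the kubelet from becoming healthy, the change would look roughly like this (illustrative only: the field name is taken from the warning text, and minikube/kubeadm regenerate this file on each init):

    # hypothetical: append the cgroup v1 opt-in to the kubelet config on the node
    printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet

The full stdout/stderr of the failed init is replayed below.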
	W1213 10:44:14.842999  947325 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00023703s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
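When kubeadm aborts at wait-control-plane like this, the gate it polls is the kubelet's own health endpoint, not the apiserver: http://127.0.0.1:10248/healthz, for up to 4m0s. The first things to capture on the node are exactly what kubeadm suggests (a sketch combining its hints):

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 100
    # the endpoint kubeadm polls; a healthy kubelet answers "ok"
    curl -s http://127.0.0.1:10248/healthz; echo

minikube then retries: it resets the cluster state once more and re-runs the same kubeadm init, which is what the lines below show.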
	
	I1213 10:44:14.843084  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:44:15.255135  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:44:15.268065  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:44:15.268119  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:44:15.276039  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:44:15.276049  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:44:15.276099  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:44:15.283960  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:44:15.284017  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:44:15.291479  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:44:15.299068  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:44:15.299125  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:44:15.306780  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.314429  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:44:15.314486  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.321813  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:44:15.329258  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:44:15.329313  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:44:15.337109  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:44:15.375292  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:44:15.375341  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:44:15.450506  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:44:15.450577  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:44:15.450617  947325 kubeadm.go:319] OS: Linux
	I1213 10:44:15.450661  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:44:15.450708  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:44:15.450754  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:44:15.450800  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:44:15.450849  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:44:15.450900  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:44:15.450944  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:44:15.450990  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:44:15.451035  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:44:15.530795  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:44:15.530912  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:44:15.531008  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:44:15.540322  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:44:15.543642  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:44:15.543721  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:44:15.543784  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:44:15.543859  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:44:15.543918  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:44:15.543987  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:44:15.544039  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:44:15.544101  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:44:15.544161  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:44:15.544244  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:44:15.544319  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:44:15.544391  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:44:15.544447  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:44:15.880761  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:44:16.054505  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:44:16.157902  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:44:16.328847  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:44:16.490203  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:44:16.491055  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:44:16.493708  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:44:16.496861  947325 out.go:252]   - Booting up control plane ...
	I1213 10:44:16.496957  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:44:16.497033  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:44:16.497100  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:44:16.511097  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:44:16.511202  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:44:16.518811  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:44:16.519350  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:44:16.519584  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:44:16.652368  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:44:16.652480  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:48:16.653403  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001096364s
	I1213 10:48:16.653421  947325 kubeadm.go:319] 
	I1213 10:48:16.653477  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:48:16.653510  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:48:16.653633  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:48:16.653637  947325 kubeadm.go:319] 
	I1213 10:48:16.653740  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:48:16.653771  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:48:16.653801  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:48:16.653804  947325 kubeadm.go:319] 
	I1213 10:48:16.659039  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:48:16.659521  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:48:16.659636  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:48:16.659899  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:48:16.659915  947325 kubeadm.go:319] 
	I1213 10:48:16.659983  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:48:16.660039  947325 kubeadm.go:403] duration metric: took 12m7.242563635s to StartCluster
	I1213 10:48:16.660068  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:48:16.660127  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:48:16.684783  947325 cri.go:89] found id: ""
	I1213 10:48:16.684798  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.684805  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:48:16.684810  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:48:16.684871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:48:16.709976  947325 cri.go:89] found id: ""
	I1213 10:48:16.709990  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.709997  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:48:16.710001  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:48:16.710060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:48:16.735338  947325 cri.go:89] found id: ""
	I1213 10:48:16.735351  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.735358  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:48:16.735363  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:48:16.735422  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:48:16.760771  947325 cri.go:89] found id: ""
	I1213 10:48:16.760784  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.760791  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:48:16.760797  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:48:16.760851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:48:16.785193  947325 cri.go:89] found id: ""
	I1213 10:48:16.785207  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.785215  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:48:16.785220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:48:16.785280  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:48:16.811008  947325 cri.go:89] found id: ""
	I1213 10:48:16.811022  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.811029  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:48:16.811034  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:48:16.811093  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:48:16.840077  947325 cri.go:89] found id: ""
	I1213 10:48:16.840092  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.840099  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:48:16.840119  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:48:16.840130  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:48:16.909363  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:48:16.909386  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:48:16.924416  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:48:16.924438  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:48:17.001976  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:48:17.001987  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:48:17.001997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:48:17.083059  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:48:17.083078  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:48:17.113855  947325 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:48:17.113886  947325 out.go:285] * 
	W1213 10:48:17.113944  947325 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.113961  947325 out.go:285] * 
	W1213 10:48:17.116079  947325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:48:17.121140  947325 out.go:203] 
	W1213 10:48:17.123914  947325 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.123972  947325 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:48:17.123993  947325 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:48:17.128861  947325 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290540792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290575401Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290616288Z" level=info msg="Create NRI interface"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291085281Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291114401Z" level=info msg="runtime interface created"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291129622Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291142299Z" level=info msg="runtime interface starting up..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291148937Z" level=info msg="starting plugins..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291165938Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291236782Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:36:08 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.084834397Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b5efff79-46eb-41f2-bde4-db3ba9dab38c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08566844Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5615dd29-1801-45cf-b9ec-bc2670925ce8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086277701Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=27142b95-3cc3-4adb-a2df-9868044a9998 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086727642Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=22595c5f-3db5-4062-8e04-cb17f6bc794b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.087217057Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4772ea3e-d27c-4029-bb8e-c23e148a40e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08768738Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9219a2d6-ec51-448e-87c0-444e5d98b53a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.088157391Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a67c2fca-67f5-45c5-89da-71309b05610c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.534115746Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3049b4b3-14f8-431e-ab4d-c6efa4a37dac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.535316398Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=aca0cbac-b5e4-4959-a768-b532f9c78063 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.53607634Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f458fd4-6b17-4cc2-8b0b-32f7a700d5d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.536719579Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=93897364-2c92-4299-ac1e-dfb20638840a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538084483Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=80c3abea-faad-48a9-8be1-ff63680847aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538942002Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9f779e3b-3069-476b-9013-f486002774b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.539437793Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=de504892-ee6f-46b3-8ac6-2712427d6188 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:48:18.340020   21195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:18.340641   21195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:18.342107   21195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:18.343311   21195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:18.344768   21195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:48:18 up  5:30,  0 user,  load average: 0.33, 0.22, 0.52
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:48:15 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:48:16 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 959.
	Dec 13 10:48:16 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:16 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:16 functional-200955 kubelet[21007]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:16 functional-200955 kubelet[21007]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:16 functional-200955 kubelet[21007]: E1213 10:48:16.275109   21007 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:48:16 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:48:16 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:48:16 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 13 10:48:16 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:16 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:17 functional-200955 kubelet[21078]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:17 functional-200955 kubelet[21078]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:17 functional-200955 kubelet[21078]: E1213 10:48:17.036151   21078 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:48:17 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:48:17 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:48:17 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 13 10:48:17 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:17 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:17 functional-200955 kubelet[21111]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:17 functional-200955 kubelet[21111]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:17 functional-200955 kubelet[21111]: E1213 10:48:17.794782   21111 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:48:17 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:48:17 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
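The kubelet journal at the end of these logs shows the unit in a tight crash loop (systemd restart counter at 959-961), each attempt failing the same cgroup v1 validation before the apiserver can ever come up. To watch this live on the node, the two commands kubeadm itself recommends can be run through minikube ssh (running them via the ssh wrapper is an assumption here; any shell on the node works):

    # unit state plus the most recent restart attempts
    minikube -p functional-200955 ssh 'sudo systemctl status kubelet'
    # the validation error repeats once per restart in the journal
    minikube -p functional-200955 ssh 'sudo journalctl -xeu kubelet | tail -n 20'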
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (357.408897ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.29s)
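Root cause, as the logs state repeatedly: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host unless the KubeletConfiguration explicitly opts back in (KEP 5573, linked in the [WARNING SystemVerification] line above). A minimal sketch of that opt-in, assuming the YAML field is the lowerCamelCase form of the 'FailCgroupV1' option named in the warning, appended to the config file the log shows kubeadm writing:

    # on the node: opt kubelet back in to cgroup v1, then restart it
    # (sketch only; field name assumed from the warning text, and a later
    #  kubeadm init would rewrite this file)
    sudo sh -c 'printf "\nfailCgroupV1: false\n" >> /var/lib/kubelet/config.yaml'
    sudo systemctl restart kubelet

Migrating the host to cgroup v2, as the warning recommends, avoids the validation entirely; note that minikube's generic suggestion above (--extra-config=kubelet.cgroup-driver=systemd) targets a driver mismatch, a different failure mode than the cgroup v1 check in this journal.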

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-200955 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-200955 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (68.761228ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-200955 get po -l tier=control-plane -n kube-system -o=json": exit status 1
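Both symptoms are consistent: the ExtraConfig post-mortem above found no kube-apiserver container at all (crictl returned found id: ""), so every request to 192.168.49.2:8441 is refused outright rather than timing out. Two quick probes that would confirm this, offered as illustrative assumptions rather than part of the test suite:

    # list apiserver containers on the node, as the post-mortem helpers do
    minikube -p functional-200955 ssh 'sudo crictl ps -a --name kube-apiserver'
    # from the CI host: nothing is listening on the published apiserver port
    curl -sk https://192.168.49.2:8441/healthz || echo 'apiserver not listening'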
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
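For reference, the NetworkSettings block above publishes the apiserver's container port 8441/tcp on 127.0.0.1:33526 of the CI host. A Go-template one-liner pulls just that binding out of the same inspect output (standard docker templating; the profile name is taken from this report):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-200955
    # prints 33526, matching the Ports map above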
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (301.414791ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr                                            │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls --format table --alsologtostderr                                                                                       │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ update-context │ functional-769798 update-context --alsologtostderr -v=2                                                                                           │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ image          │ functional-769798 image ls                                                                                                                        │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete         │ -p functional-769798                                                                                                                              │ functional-769798 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start          │ -p functional-200955 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ start          │ -p functional-200955 --alsologtostderr -v=8                                                                                                       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:29 UTC │                     │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add registry.k8s.io/pause:latest                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache add minikube-local-cache-test:functional-200955                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ functional-200955 cache delete minikube-local-cache-test:functional-200955                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl images                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	│ cache          │ functional-200955 cache reload                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh            │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ kubectl        │ functional-200955 kubectl -- --context functional-200955 get pods                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	│ start          │ -p functional-200955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:36 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:36:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:36:05.024663  947325 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:36:05.024857  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.024862  947325 out.go:374] Setting ErrFile to fd 2...
	I1213 10:36:05.024867  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.025148  947325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:36:05.025578  947325 out.go:368] Setting JSON to false
	I1213 10:36:05.026512  947325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19114,"bootTime":1765603051,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:36:05.026573  947325 start.go:143] virtualization:  
	I1213 10:36:05.030119  947325 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:36:05.033180  947325 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:36:05.033273  947325 notify.go:221] Checking for updates...
	I1213 10:36:05.036966  947325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:36:05.041647  947325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:36:05.044535  947325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:36:05.047483  947325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:36:05.050413  947325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:36:05.053885  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:05.053982  947325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:36:05.081037  947325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:36:05.081166  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.151201  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.14075062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.151307  947325 docker.go:319] overlay module found
	I1213 10:36:05.154359  947325 out.go:179] * Using the docker driver based on existing profile
	I1213 10:36:05.157187  947325 start.go:309] selected driver: docker
	I1213 10:36:05.157194  947325 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.157283  947325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:36:05.157388  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.214971  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.204866403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.215380  947325 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:36:05.215410  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:05.215457  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:05.215500  947325 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.218694  947325 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:36:05.221699  947325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:36:05.224563  947325 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:36:05.227409  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:05.227448  947325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:36:05.227455  947325 cache.go:65] Caching tarball of preloaded images
	I1213 10:36:05.227491  947325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:36:05.227538  947325 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:36:05.227551  947325 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:36:05.227666  947325 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:36:05.247494  947325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:36:05.247505  947325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:36:05.247518  947325 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:36:05.247549  947325 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:36:05.247604  947325 start.go:364] duration metric: took 37.317µs to acquireMachinesLock for "functional-200955"
	I1213 10:36:05.247623  947325 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:36:05.247627  947325 fix.go:54] fixHost starting: 
	I1213 10:36:05.247894  947325 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:36:05.265053  947325 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:36:05.265102  947325 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:36:05.268458  947325 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:36:05.268485  947325 machine.go:94] provisionDockerMachine start ...
	I1213 10:36:05.268569  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.285699  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.286021  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.286027  947325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:36:05.433614  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.433628  947325 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:36:05.433698  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.452166  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.452470  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.452478  947325 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:36:05.611951  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.612044  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.630892  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.631191  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.631205  947325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:36:05.782771  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
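	The shell snippet above is how the provisioner pins the machine name in /etc/hosts: if no entry for functional-200955 exists, it rewrites an existing 127.0.1.1 line in place, otherwise it appends one. A pure-Go illustration of the same decision logic (a sketch for clarity, not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname mirrors the shell logic in the log above: if no /etc/hosts
// line already maps the hostname, rewrite an existing 127.0.1.1 entry in
// place (the sed branch) or append a fresh one (the tee -a branch).
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return hosts // already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite in place
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // append
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost", "functional-200955"))
}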
	I1213 10:36:05.782787  947325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:36:05.782810  947325 ubuntu.go:190] setting up certificates
	I1213 10:36:05.782824  947325 provision.go:84] configureAuth start
	I1213 10:36:05.782884  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:05.800513  947325 provision.go:143] copyHostCerts
	I1213 10:36:05.800580  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:36:05.800588  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:36:05.800662  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:36:05.800773  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:36:05.800777  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:36:05.800802  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:36:05.800861  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:36:05.800865  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:36:05.800887  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:36:05.800938  947325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:36:06.162765  947325 provision.go:177] copyRemoteCerts
	I1213 10:36:06.162821  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:36:06.162864  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.179964  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.285273  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:36:06.303138  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:36:06.321000  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:36:06.339159  947325 provision.go:87] duration metric: took 556.311814ms to configureAuth
	I1213 10:36:06.339177  947325 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:36:06.339382  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:06.339492  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.357323  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:06.357649  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:06.357662  947325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:36:06.705283  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:36:06.705297  947325 machine.go:97] duration metric: took 1.436804594s to provisionDockerMachine
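	The provisioning step that just completed wrote /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS marking the service CIDR 10.96.0.0/12 as an insecure registry, then restarted CRI-O. A sketch that replays the same payload locally rather than over SSH (assumes root and an installed CRI-O; illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

// Replays the SSH payload from the log: write the CRIO_MINIKUBE_OPTIONS
// sysconfig fragment, then restart CRI-O so it treats the in-cluster
// service CIDR as an insecure (plain-HTTP) registry.
func main() {
	script := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	fmt.Println(string(out), err)
}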
	I1213 10:36:06.705307  947325 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:36:06.705318  947325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:36:06.705379  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:36:06.705435  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.722886  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.829449  947325 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:36:06.832816  947325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:36:06.832847  947325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:36:06.832858  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:36:06.832914  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:36:06.832996  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:36:06.833088  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:36:06.833134  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:36:06.840686  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:06.859025  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:36:06.877717  947325 start.go:296] duration metric: took 172.395592ms for postStartSetup
	I1213 10:36:06.877814  947325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:36:06.877857  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.896880  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.998897  947325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:36:07.005668  947325 fix.go:56] duration metric: took 1.758032508s for fixHost
	I1213 10:36:07.005685  947325 start.go:83] releasing machines lock for "functional-200955", held for 1.758074248s
	I1213 10:36:07.005790  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:07.024345  947325 ssh_runner.go:195] Run: cat /version.json
	I1213 10:36:07.024397  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.024410  947325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:36:07.024473  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.045627  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.056017  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.235962  947325 ssh_runner.go:195] Run: systemctl --version
	I1213 10:36:07.243338  947325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:36:07.293399  947325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:36:07.297828  947325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:36:07.297890  947325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:36:07.305998  947325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:36:07.306012  947325 start.go:496] detecting cgroup driver to use...
	I1213 10:36:07.306043  947325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:36:07.306089  947325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:36:07.321360  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:36:07.334818  947325 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:36:07.334873  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:36:07.350268  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:36:07.363266  947325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:36:07.482802  947325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:36:07.601250  947325 docker.go:234] disabling docker service ...
	I1213 10:36:07.601314  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:36:07.616649  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:36:07.630193  947325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:36:07.750803  947325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:36:07.872755  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:36:07.885775  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:36:07.901044  947325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:36:07.901118  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.910913  947325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:36:07.910999  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.920242  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.929183  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.938207  947325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:36:07.946601  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.956231  947325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.964904  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.974470  947325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:36:07.983694  947325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:36:07.992492  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.121808  947325 ssh_runner.go:195] Run: sudo systemctl restart crio
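	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image becomes registry.k8s.io/pause:3.10.1, cgroup_manager becomes "cgroupfs", conmon_cgroup is re-added as "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls, all before the daemon-reload and crio restart. A condensed Go sketch of the first few edits, using the same sed expressions as the log (requires root; illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

// applyCrioEdits replays, locally, the sed sequence the log shows minikube
// running over SSH against the CRI-O drop-in config.
func applyCrioEdits() error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
			return fmt.Errorf("sed %q: %v: %s", e, err, out)
		}
	}
	return nil
}

func main() {
	if err := applyCrioEdits(); err != nil {
		fmt.Println(err)
	}
}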
	I1213 10:36:08.297420  947325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:36:08.297489  947325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:36:08.301247  947325 start.go:564] Will wait 60s for crictl version
	I1213 10:36:08.301305  947325 ssh_runner.go:195] Run: which crictl
	I1213 10:36:08.304718  947325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:36:08.329152  947325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:36:08.329228  947325 ssh_runner.go:195] Run: crio --version
	I1213 10:36:08.358630  947325 ssh_runner.go:195] Run: crio --version
	I1213 10:36:08.393160  947325 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:36:08.396025  947325 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:36:08.412435  947325 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:36:08.419349  947325 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:36:08.422234  947325 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:36:08.422367  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:08.422431  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.457237  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.457249  947325 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:36:08.457306  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.483246  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.483258  947325 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:36:08.483264  947325 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:36:08.483360  947325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
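	One detail worth noting in the generated unit above: the empty ExecStart= line is the standard systemd drop-in idiom; it clears the ExecStart inherited from the base kubelet.service so the following line replaces it outright rather than adding a second command. A sketch of writing such a drop-in (flags abbreviated; minikube scp's the real file to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as the scp lines further down show):

package main

import (
	"fmt"
	"os"
)

// Writes a systemd drop-in like the one in the log. The blank ExecStart=
// is deliberate: in a drop-in it resets the unit's existing ExecStart so
// the next line fully replaces it.
func main() {
	unit := "[Service]\n" +
		"ExecStart=\n" +
		"ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf\n"
	// Local path for illustration only.
	if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		fmt.Println(err)
	}
}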
	I1213 10:36:08.483446  947325 ssh_runner.go:195] Run: crio config
	I1213 10:36:08.545147  947325 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:36:08.545173  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:08.545183  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:08.545197  947325 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:36:08.545221  947325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:36:08.545347  947325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
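	The kubeadm config printed above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A small Go sketch (assuming the gopkg.in/yaml.v3 dependency is on the module path) that splits such a stream and reports each document's apiVersion and kind, using a naive `---` split that is sufficient for this manifest:

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	manifest := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.35.0-beta.0
`
	for _, doc := range strings.Split(manifest, "---") {
		var hdr struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil {
			fmt.Println("parse error:", err)
			continue
		}
		fmt.Printf("%s %s\n", hdr.APIVersion, hdr.Kind)
	}
}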
	
	I1213 10:36:08.545423  947325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:36:08.553515  947325 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:36:08.553607  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:36:08.561293  947325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:36:08.574385  947325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:36:08.587429  947325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 10:36:08.600337  947325 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:36:08.603994  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.714374  947325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:36:08.729978  947325 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:36:08.729989  947325 certs.go:195] generating shared ca certs ...
	I1213 10:36:08.730004  947325 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:36:08.730137  947325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:36:08.730179  947325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:36:08.730184  947325 certs.go:257] generating profile certs ...
	I1213 10:36:08.730263  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:36:08.730310  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:36:08.730347  947325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:36:08.730463  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:36:08.730496  947325 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:36:08.730503  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:36:08.730557  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:36:08.730581  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:36:08.730604  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:36:08.730645  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:08.731237  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:36:08.752034  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:36:08.773437  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:36:08.794430  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:36:08.812223  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:36:08.829741  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:36:08.846903  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:36:08.865036  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:36:08.883435  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:36:08.901321  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:36:08.919555  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:36:08.937123  947325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:36:08.950079  947325 ssh_runner.go:195] Run: openssl version
	I1213 10:36:08.956456  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.964062  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:36:08.971445  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975220  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975278  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:36:09.016546  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:36:09.024284  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.031776  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:36:09.039308  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.042991  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.043047  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.084141  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:36:09.091531  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.098770  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:36:09.106212  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.109989  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.110044  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.153254  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
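	The test/ln/hash sequence above installs each CA into /usr/share/ca-certificates and symlinks it under /etc/ssl/certs by OpenSSL subject hash (51391683.0, b5213941.0, and 3ec20f2e.0 in this run), the same convention c_rehash uses. A Go sketch of computing the hash that names the symlink:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <pem>` and returns the
// subject hash that names the /etc/ssl/certs/<hash>.0 symlink seen above
// (e.g. b5213941.0 for minikubeCA.pem).
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// minikube then runs: sudo ln -fs <pem> /etc/ssl/certs/<hash>.0
	fmt.Println(h + ".0")
}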
	I1213 10:36:09.160715  947325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:36:09.164506  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:36:09.205710  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:36:09.247436  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:36:09.288348  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:36:09.331611  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:36:09.374582  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:36:09.417486  947325 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:09.417589  947325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:36:09.417682  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.449632  947325 cri.go:89] found id: ""
	I1213 10:36:09.449706  947325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:36:09.457511  947325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:36:09.457521  947325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:36:09.457596  947325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:36:09.465280  947325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.465840  947325 kubeconfig.go:125] found "functional-200955" server: "https://192.168.49.2:8441"
	I1213 10:36:09.467296  947325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:36:09.475528  947325 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:21:33.398300096 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:36:08.597035311 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
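The drift detection above hinges on diff's exit status: 0 means the freshly rendered kubeadm.yaml.new matches the deployed kubeadm.yaml, 1 means they differ (here only the enable-admission-plugins value changed), anything else is an error. A sketch of that exit-code branching, assuming a plain local exec rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff -u exits 0 when the files match, 1 when they differ, >1 on error.
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("no config drift; keep the existing control plane")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("config drift detected, reconfiguring:\n%s", out)
		return
	}
	fmt.Println("diff failed:", err)
}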
	I1213 10:36:09.475546  947325 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:36:09.475557  947325 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:36:09.475616  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.507924  947325 cri.go:89] found id: ""
	I1213 10:36:09.508000  947325 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:36:09.528470  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:36:09.536474  947325 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:25 /etc/kubernetes/scheduler.conf
	
	I1213 10:36:09.536539  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:36:09.544588  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:36:09.552476  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.552532  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:36:09.560285  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.567834  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.567887  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.575592  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:36:09.583902  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.583961  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
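The grep-and-remove cycle above checks each kubeconfig under /etc/kubernetes for the expected endpoint; grep exiting 1 means the string is absent, so the stale file is deleted and regenerated by the kubeconfig phase that follows. A sketch of the same logic done in-process, assuming direct file access instead of grep over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Stale or unreadable: remove it so "kubeadm init phase
			// kubeconfig" below regenerates it against the right endpoint.
			os.Remove(conf)
			fmt.Println("removed", conf)
		}
	}
}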
	I1213 10:36:09.591566  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:36:09.599534  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:09.647986  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.096705  947325 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44869318s)
	I1213 10:36:11.096768  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.325396  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.390971  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
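Rather than a full kubeadm init, the restart replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the refreshed /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing, again assuming plain local exec:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}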
	I1213 10:36:11.438539  947325 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:36:11.438613  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:11.939787  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:12.439662  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:12.939059  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:13.439009  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:13.939132  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:14.438804  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:14.939015  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:15.439388  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:15.939371  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:16.439364  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:16.939242  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:17.438810  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:17.938842  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:18.439574  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:18.939403  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:19.438808  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:19.938978  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:20.438838  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:20.938801  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:21.439702  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:21.938786  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:22.438810  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:22.938804  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:23.438760  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:23.939498  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:24.438837  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:24.939362  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:25.439492  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:25.939539  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:26.439316  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:26.939385  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:27.438813  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:27.938714  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:28.439704  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:28.938715  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:29.438702  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:29.938870  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:30.439316  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:30.939369  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:31.438789  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:31.938728  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:32.439400  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:32.938824  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:33.438805  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:33.938821  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:34.439650  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:34.939567  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:35.439266  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:35.938806  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:36.439740  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:36.938910  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:37.439180  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:37.939269  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:38.439067  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:38.938814  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:39.438987  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:39.939068  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:40.439485  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:40.939755  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:41.439569  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:41.939359  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:42.438799  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:42.939523  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:43.438794  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:43.939463  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:44.439348  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:44.938832  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:45.439512  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:45.939368  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:46.439415  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:46.938802  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:47.439348  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:47.938774  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:48.439499  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:48.938765  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:49.438815  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:49.939489  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:50.439425  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:50.938961  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:51.438899  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:51.938980  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:52.438768  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:52.939488  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:53.438802  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:53.938784  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:54.439567  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:54.939001  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:55.439034  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:55.939017  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:56.438830  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:56.938984  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:57.438956  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:57.939727  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:58.439422  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:58.938982  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:59.438715  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:59.938800  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:00.439480  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:00.939557  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:01.439633  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:01.938752  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:02.438782  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:02.939460  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:03.439509  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:03.939666  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:04.438756  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:04.938745  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:05.438791  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:05.938960  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:06.439641  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:06.939760  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:07.438893  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:07.939464  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:08.438808  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:08.938829  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:09.438860  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:09.939094  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:10.439786  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:10.939776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
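The run above polls pgrep roughly every 500ms; after about a minute with no kube-apiserver process the wait gives up and the diagnostics below kick in. A sketch of such a poll-until-deadline loop (hypothetical helper, local exec standing in for minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline passes.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a matching process exists, 1 when none does.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err) // fall through to log gathering, as in the run above
	}
}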
	I1213 10:37:11.439694  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:11.439774  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:11.465379  947325 cri.go:89] found id: ""
	I1213 10:37:11.465394  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.465401  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:11.465406  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:11.465463  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:11.498722  947325 cri.go:89] found id: ""
	I1213 10:37:11.498736  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.498744  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:11.498749  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:11.498808  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:11.528435  947325 cri.go:89] found id: ""
	I1213 10:37:11.528450  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.528456  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:11.528461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:11.528520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:11.556412  947325 cri.go:89] found id: ""
	I1213 10:37:11.556428  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.556435  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:11.556439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:11.556495  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:11.582029  947325 cri.go:89] found id: ""
	I1213 10:37:11.582043  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.582050  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:11.582055  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:11.582111  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:11.606900  947325 cri.go:89] found id: ""
	I1213 10:37:11.606914  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.606921  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:11.606926  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:11.606995  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:11.631834  947325 cri.go:89] found id: ""
	I1213 10:37:11.631848  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.631855  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:11.631863  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:11.631873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:11.696990  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:11.697011  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:11.711905  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:11.711923  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:11.780498  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:11.780514  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:11.780525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:11.849149  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:11.849169  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
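The container-status command above is deliberately defensive: "which crictl || echo crictl" tolerates crictl missing from PATH, and the trailing "|| sudo docker ps -a" falls back to Docker when the CRI listing fails. A sketch of the same fallback chain, assuming local exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Try the CRI tool first; fall back to docker if it errors out.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("no container runtime responded:", err)
		return
	}
	fmt.Print(string(out))
}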
	I1213 10:37:14.380275  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:14.390300  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:14.390376  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:14.422358  947325 cri.go:89] found id: ""
	I1213 10:37:14.422408  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.422434  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:14.422439  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:14.422577  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:14.449364  947325 cri.go:89] found id: ""
	I1213 10:37:14.449379  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.449386  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:14.449391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:14.449448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:14.478529  947325 cri.go:89] found id: ""
	I1213 10:37:14.478543  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.478550  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:14.478555  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:14.478612  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:14.521652  947325 cri.go:89] found id: ""
	I1213 10:37:14.521666  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.521673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:14.521678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:14.521736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:14.558521  947325 cri.go:89] found id: ""
	I1213 10:37:14.558535  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.558542  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:14.558547  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:14.558605  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:14.582435  947325 cri.go:89] found id: ""
	I1213 10:37:14.582448  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.582455  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:14.582461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:14.582518  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:14.607776  947325 cri.go:89] found id: ""
	I1213 10:37:14.607791  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.607799  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:14.607807  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:14.607816  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:14.673008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:14.673028  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:14.688569  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:14.688585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:14.753510  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:14.753524  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:14.753556  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:14.820848  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:14.820868  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:17.353563  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:17.363824  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:17.363887  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:17.393249  947325 cri.go:89] found id: ""
	I1213 10:37:17.393263  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.393271  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:17.393275  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:17.393334  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:17.421143  947325 cri.go:89] found id: ""
	I1213 10:37:17.421157  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.421164  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:17.421169  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:17.421226  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:17.445347  947325 cri.go:89] found id: ""
	I1213 10:37:17.445361  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.445368  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:17.445372  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:17.445428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:17.474380  947325 cri.go:89] found id: ""
	I1213 10:37:17.474406  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.474413  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:17.474419  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:17.474502  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:17.510146  947325 cri.go:89] found id: ""
	I1213 10:37:17.510160  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.510167  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:17.510172  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:17.510228  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:17.544872  947325 cri.go:89] found id: ""
	I1213 10:37:17.544897  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.544911  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:17.544917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:17.544987  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:17.570527  947325 cri.go:89] found id: ""
	I1213 10:37:17.570542  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.570549  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:17.570556  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:17.570567  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:17.634904  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:17.634924  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:17.649198  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:17.649216  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:17.710891  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:17.710910  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:17.710921  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:17.779540  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:17.779561  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.315323  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:20.326110  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:20.326185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:20.351285  947325 cri.go:89] found id: ""
	I1213 10:37:20.351299  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.351307  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:20.351312  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:20.351381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:20.377322  947325 cri.go:89] found id: ""
	I1213 10:37:20.377335  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.377343  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:20.377352  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:20.377413  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:20.403676  947325 cri.go:89] found id: ""
	I1213 10:37:20.403691  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.403698  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:20.403704  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:20.403766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:20.433713  947325 cri.go:89] found id: ""
	I1213 10:37:20.433736  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.433744  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:20.433749  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:20.433809  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:20.459243  947325 cri.go:89] found id: ""
	I1213 10:37:20.459258  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.459265  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:20.459270  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:20.459328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:20.505295  947325 cri.go:89] found id: ""
	I1213 10:37:20.505310  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.505317  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:20.505322  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:20.505382  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:20.534487  947325 cri.go:89] found id: ""
	I1213 10:37:20.534502  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.534510  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:20.534518  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:20.534529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.562816  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:20.562833  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:20.626774  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:20.626798  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:20.642510  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:20.642526  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:20.716150  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:20.716164  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:20.716176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.288286  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:23.298705  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:23.298766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:23.333024  947325 cri.go:89] found id: ""
	I1213 10:37:23.333038  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.333046  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:23.333051  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:23.333115  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:23.358903  947325 cri.go:89] found id: ""
	I1213 10:37:23.358916  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.358924  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:23.358929  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:23.358989  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:23.384787  947325 cri.go:89] found id: ""
	I1213 10:37:23.384801  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.384808  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:23.384812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:23.384871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:23.410002  947325 cri.go:89] found id: ""
	I1213 10:37:23.410036  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.410061  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:23.410086  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:23.410150  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:23.434837  947325 cri.go:89] found id: ""
	I1213 10:37:23.434865  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.434872  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:23.434878  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:23.434945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:23.464375  947325 cri.go:89] found id: ""
	I1213 10:37:23.464389  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.464396  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:23.464402  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:23.464472  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:23.506074  947325 cri.go:89] found id: ""
	I1213 10:37:23.506089  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.506097  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:23.506104  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:23.506116  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.589169  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:23.589191  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:23.619461  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:23.619477  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:23.688698  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:23.688720  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:23.703620  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:23.703637  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:23.771897  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:26.272169  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:26.282101  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:26.282172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:26.308055  947325 cri.go:89] found id: ""
	I1213 10:37:26.308071  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.308078  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:26.308086  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:26.308147  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:26.334700  947325 cri.go:89] found id: ""
	I1213 10:37:26.334722  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.334729  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:26.334735  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:26.334799  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:26.360726  947325 cri.go:89] found id: ""
	I1213 10:37:26.360749  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.360758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:26.360763  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:26.360830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:26.385135  947325 cri.go:89] found id: ""
	I1213 10:37:26.385149  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.385157  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:26.385162  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:26.385233  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:26.412837  947325 cri.go:89] found id: ""
	I1213 10:37:26.412851  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.412858  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:26.412863  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:26.412942  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:26.437812  947325 cri.go:89] found id: ""
	I1213 10:37:26.437827  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.437834  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:26.437839  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:26.437900  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:26.463570  947325 cri.go:89] found id: ""
	I1213 10:37:26.463584  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.463592  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:26.463600  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:26.463611  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:26.534802  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:26.534823  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:26.550643  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:26.550658  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:26.612829  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:26.612839  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:26.612849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:26.681461  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:26.681480  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
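	[editor's note] The block above is one complete pass of minikube's control-plane probe: it pgreps for a kube-apiserver process, then asks crictl for each expected component container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and falls through to log gathering. A minimal bash sketch of that probe, using the exact commands from the log (the ~3s retry cadence is inferred from the timestamps, not taken from minikube's source):

    #!/bin/bash
    # Probe for a live control plane the way the log above does.
    # The pgrep/crictl invocations are copied verbatim from the log lines.
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
                kube-controller-manager kindnet)
    while true; do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process is up"; break
      fi
      for name in "${components[@]}"; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "0 containers matching \"$name\""
      done
      sleep 3   # assumed interval, matching the timestamps above
    done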
	I1213 10:37:29.210709  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:29.221193  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:29.221255  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:29.249274  947325 cri.go:89] found id: ""
	I1213 10:37:29.249289  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.249297  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:29.249301  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:29.249369  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:29.276685  947325 cri.go:89] found id: ""
	I1213 10:37:29.276709  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.276718  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:29.276723  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:29.276788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:29.303267  947325 cri.go:89] found id: ""
	I1213 10:37:29.303281  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.303289  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:29.303294  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:29.303355  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:29.328158  947325 cri.go:89] found id: ""
	I1213 10:37:29.328173  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.328180  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:29.328186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:29.328244  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:29.355541  947325 cri.go:89] found id: ""
	I1213 10:37:29.355556  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.355565  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:29.355570  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:29.355627  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:29.381411  947325 cri.go:89] found id: ""
	I1213 10:37:29.381426  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.381433  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:29.381439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:29.381501  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:29.407073  947325 cri.go:89] found id: ""
	I1213 10:37:29.407088  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.407094  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:29.407101  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:29.407113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:29.422330  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:29.422347  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:29.498825  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:29.498837  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:29.498850  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:29.575835  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:29.575856  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:29.607770  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:29.607790  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.181248  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.191812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:32.191876  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:32.217211  947325 cri.go:89] found id: ""
	I1213 10:37:32.217225  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.217233  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:32.217238  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:32.217293  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:32.243073  947325 cri.go:89] found id: ""
	I1213 10:37:32.243087  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.243095  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:32.243100  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:32.243172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:32.272999  947325 cri.go:89] found id: ""
	I1213 10:37:32.273013  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.273020  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:32.273025  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:32.273084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:32.299079  947325 cri.go:89] found id: ""
	I1213 10:37:32.299092  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.299099  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:32.299104  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:32.299161  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:32.328707  947325 cri.go:89] found id: ""
	I1213 10:37:32.328722  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.328729  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:32.328734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:32.328795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:32.354361  947325 cri.go:89] found id: ""
	I1213 10:37:32.354375  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.354382  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:32.354388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:32.354448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:32.380069  947325 cri.go:89] found id: ""
	I1213 10:37:32.380083  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.380089  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:32.380096  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:32.380107  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.445012  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:32.445036  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:32.460199  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:32.460223  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:32.549445  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:32.549456  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:32.549467  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:32.617595  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:32.617617  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.148911  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.159421  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:35.159482  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:35.186963  947325 cri.go:89] found id: ""
	I1213 10:37:35.186976  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.186984  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:35.186989  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:35.187046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:35.216128  947325 cri.go:89] found id: ""
	I1213 10:37:35.216142  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.216153  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:35.216158  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:35.216217  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:35.244930  947325 cri.go:89] found id: ""
	I1213 10:37:35.244945  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.244953  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:35.244958  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:35.245020  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:35.270186  947325 cri.go:89] found id: ""
	I1213 10:37:35.270200  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.270207  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:35.270212  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:35.270268  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:35.296166  947325 cri.go:89] found id: ""
	I1213 10:37:35.296180  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.296187  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:35.296192  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:35.296249  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:35.325322  947325 cri.go:89] found id: ""
	I1213 10:37:35.325337  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.325344  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:35.325349  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:35.325411  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:35.350870  947325 cri.go:89] found id: ""
	I1213 10:37:35.350884  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.350892  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:35.350900  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:35.350911  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:35.365840  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:35.365857  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:35.428973  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:35.428993  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:35.429004  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:35.497503  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:35.497522  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.530732  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:35.530751  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
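	[editor's note] When no component containers turn up, each pass gathers node-level logs: the last 400 journald lines for the kubelet and crio units, plus kernel messages at warning level and above. These are the literal commands from the log, runnable as-is on the node:

    # Node-level log gathering, verbatim from the log lines above.
    sudo journalctl -u kubelet -n 400   # last 400 kubelet entries
    sudo journalctl -u crio -n 400      # last 400 CRI-O entries
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400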
	I1213 10:37:38.099975  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.110243  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:38.110306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:38.140777  947325 cri.go:89] found id: ""
	I1213 10:37:38.140792  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.140798  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:38.140804  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:38.140871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:38.167186  947325 cri.go:89] found id: ""
	I1213 10:37:38.167200  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.167207  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:38.167212  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:38.167276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:38.193305  947325 cri.go:89] found id: ""
	I1213 10:37:38.193318  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.193326  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:38.193331  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:38.193388  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:38.219451  947325 cri.go:89] found id: ""
	I1213 10:37:38.219464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.219472  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:38.219477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:38.219542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:38.249284  947325 cri.go:89] found id: ""
	I1213 10:37:38.249299  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.249306  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:38.249311  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:38.249380  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:38.275451  947325 cri.go:89] found id: ""
	I1213 10:37:38.275464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.275471  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:38.275477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:38.275538  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:38.300476  947325 cri.go:89] found id: ""
	I1213 10:37:38.300490  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.300497  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:38.300504  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:38.300517  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:38.366681  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:38.366700  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:38.381405  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:38.381423  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:38.441215  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:38.441225  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:38.441236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:38.508504  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:38.508525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.051455  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.061451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:41.061522  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:41.087312  947325 cri.go:89] found id: ""
	I1213 10:37:41.087331  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.087338  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:41.087343  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:41.087416  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:41.116231  947325 cri.go:89] found id: ""
	I1213 10:37:41.116246  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.116253  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:41.116258  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:41.116316  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:41.147429  947325 cri.go:89] found id: ""
	I1213 10:37:41.147444  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.147451  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:41.147457  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:41.147516  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:41.176551  947325 cri.go:89] found id: ""
	I1213 10:37:41.176565  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.176573  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:41.176578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:41.176634  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:41.204132  947325 cri.go:89] found id: ""
	I1213 10:37:41.204146  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.204154  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:41.204159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:41.204223  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:41.230785  947325 cri.go:89] found id: ""
	I1213 10:37:41.230799  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.230807  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:41.230813  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:41.230880  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:41.256411  947325 cri.go:89] found id: ""
	I1213 10:37:41.256425  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.256433  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:41.256440  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:41.256451  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.285617  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:41.285636  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:41.356895  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:41.356914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:41.371698  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:41.371714  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:41.436289  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:41.436299  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:41.436309  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.006670  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.021718  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:44.021788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:44.048534  947325 cri.go:89] found id: ""
	I1213 10:37:44.048549  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.048565  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:44.048571  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:44.048674  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:44.079425  947325 cri.go:89] found id: ""
	I1213 10:37:44.079439  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.079446  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:44.079451  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:44.079523  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:44.106317  947325 cri.go:89] found id: ""
	I1213 10:37:44.106334  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.106342  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:44.106348  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:44.106420  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:44.132520  947325 cri.go:89] found id: ""
	I1213 10:37:44.132534  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.132553  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:44.132558  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:44.132628  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:44.161205  947325 cri.go:89] found id: ""
	I1213 10:37:44.161219  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.161226  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:44.161231  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:44.161291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:44.187876  947325 cri.go:89] found id: ""
	I1213 10:37:44.187890  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.187898  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:44.187903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:44.187961  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:44.215854  947325 cri.go:89] found id: ""
	I1213 10:37:44.215869  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.215876  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:44.215884  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:44.215894  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:44.284854  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:44.284866  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:44.284876  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.355349  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:44.355373  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:44.384733  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:44.384752  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:44.453769  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:44.453788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
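	[editor's note] The "container status" step is a single fallback chain: resolve crictl with `which` (echoing the bare name so sudo still tries $PATH if `which` finds nothing), and only if that whole command fails, fall back to docker. Spelled out with modern command substitution:

    # Equivalent to: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a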
	I1213 10:37:46.969736  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.979972  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:46.980038  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:47.007061  947325 cri.go:89] found id: ""
	I1213 10:37:47.007075  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.007082  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:47.007087  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:47.007146  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:47.036818  947325 cri.go:89] found id: ""
	I1213 10:37:47.036832  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.036858  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:47.036863  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:47.036921  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:47.061328  947325 cri.go:89] found id: ""
	I1213 10:37:47.061342  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.061349  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:47.061355  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:47.061415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:47.089017  947325 cri.go:89] found id: ""
	I1213 10:37:47.089032  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.089039  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:47.089044  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:47.089103  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:47.114790  947325 cri.go:89] found id: ""
	I1213 10:37:47.114803  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.114810  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:47.114817  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:47.114877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:47.139554  947325 cri.go:89] found id: ""
	I1213 10:37:47.139575  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.139583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:47.139589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:47.139654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:47.165228  947325 cri.go:89] found id: ""
	I1213 10:37:47.165241  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.165248  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:47.165256  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:47.165266  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:47.232293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:47.232313  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:47.261718  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:47.261736  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:47.331592  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:47.331613  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:47.345881  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:47.345897  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:47.412948  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
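	[editor's note] Every "describe nodes" attempt fails identically because nothing is listening on port 8441, the apiserver port for this profile: kubectl's discovery client logs five connection-refused errors from memcache.go before printing the summary line. Two quick checks on the node confirm the same condition (assuming ss from iproute2 is installed; the kubectl path and kubeconfig are taken from the log):

    # Is anything bound to the apiserver port?
    sudo ss -ltn 'sport = :8441'
    # Reproduce the refusal with the same binary and kubeconfig as the log:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig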
	I1213 10:37:49.913659  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.923942  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:49.924005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:49.951850  947325 cri.go:89] found id: ""
	I1213 10:37:49.951863  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.951871  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:49.951876  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:49.951936  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:49.976949  947325 cri.go:89] found id: ""
	I1213 10:37:49.976963  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.976971  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:49.976976  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:49.977034  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:50.020670  947325 cri.go:89] found id: ""
	I1213 10:37:50.020686  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.020693  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:50.020698  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:50.020779  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:50.048299  947325 cri.go:89] found id: ""
	I1213 10:37:50.048316  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.048323  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:50.048328  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:50.048397  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:50.075060  947325 cri.go:89] found id: ""
	I1213 10:37:50.075074  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.075081  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:50.075087  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:50.075148  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:50.104579  947325 cri.go:89] found id: ""
	I1213 10:37:50.104593  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.104601  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:50.104607  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:50.104666  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:50.132679  947325 cri.go:89] found id: ""
	I1213 10:37:50.132693  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.132701  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:50.132714  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:50.132725  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:50.197209  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:50.197219  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:50.197230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:50.267157  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:50.267176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:50.297061  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:50.297077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:50.363929  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:50.363950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
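
The cycle above is minikube's apiserver wait loop: it probes for a kube-apiserver process with pgrep, lists each expected control-plane container via crictl, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal sketch of the same checks run by hand, assuming SSH access to the node via minikube ssh; <profile> is a placeholder, and the commands themselves are copied from the log:

	# probe for a running apiserver process (same pattern the log uses)
	minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list control-plane containers one component at a time, as the cri.go listing does
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name="$c"
	done
	# collect the same unit logs minikube gathers on failure
	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
	minikube -p <profile> ssh -- sudo journalctl -u crio -n 400
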
	I1213 10:37:52.879245  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.889673  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:52.889741  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:52.914746  947325 cri.go:89] found id: ""
	I1213 10:37:52.914768  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.914776  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:52.914781  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:52.914845  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:52.941523  947325 cri.go:89] found id: ""
	I1213 10:37:52.941554  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.941562  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:52.941567  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:52.941623  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:52.967010  947325 cri.go:89] found id: ""
	I1213 10:37:52.967027  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.967035  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:52.967040  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:52.967141  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:52.992300  947325 cri.go:89] found id: ""
	I1213 10:37:52.992313  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.992321  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:52.992326  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:52.992386  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:53.020045  947325 cri.go:89] found id: ""
	I1213 10:37:53.020058  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.020074  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:53.020081  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:53.020140  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:53.053897  947325 cri.go:89] found id: ""
	I1213 10:37:53.053911  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.053918  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:53.053923  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:53.053982  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:53.079867  947325 cri.go:89] found id: ""
	I1213 10:37:53.079882  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.079890  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:53.079897  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:53.079908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:53.144913  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:53.144932  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:53.159844  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:53.159861  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:53.226427  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:53.226436  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:53.226447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:53.294490  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:53.294510  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:55.827710  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.837950  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:55.838028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:55.863239  947325 cri.go:89] found id: ""
	I1213 10:37:55.863253  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.863260  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:55.863265  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:55.863331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:55.892876  947325 cri.go:89] found id: ""
	I1213 10:37:55.892890  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.892897  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:55.892902  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:55.892962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:55.919038  947325 cri.go:89] found id: ""
	I1213 10:37:55.919051  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.919059  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:55.919064  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:55.919123  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:55.944982  947325 cri.go:89] found id: ""
	I1213 10:37:55.944997  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.945004  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:55.945009  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:55.945066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:55.974750  947325 cri.go:89] found id: ""
	I1213 10:37:55.974764  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.974771  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:55.974776  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:55.974836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:56.006337  947325 cri.go:89] found id: ""
	I1213 10:37:56.006352  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.006360  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:56.006365  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:56.006429  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:56.033183  947325 cri.go:89] found id: ""
	I1213 10:37:56.033199  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.033206  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:56.033214  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:56.033225  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:56.098781  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:56.098801  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:56.113910  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:56.113933  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:56.179999  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:56.180009  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:56.180020  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:56.248249  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:56.248271  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:58.777669  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.788383  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:58.788443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:58.815846  947325 cri.go:89] found id: ""
	I1213 10:37:58.815861  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.815868  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:58.815873  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:58.815933  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:58.845912  947325 cri.go:89] found id: ""
	I1213 10:37:58.845926  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.845933  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:58.845938  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:58.846003  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:58.870933  947325 cri.go:89] found id: ""
	I1213 10:37:58.870947  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.870954  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:58.870959  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:58.871017  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:58.900972  947325 cri.go:89] found id: ""
	I1213 10:37:58.900986  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.900993  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:58.900998  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:58.901054  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:58.926234  947325 cri.go:89] found id: ""
	I1213 10:37:58.926257  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.926266  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:58.926271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:58.926338  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:58.951314  947325 cri.go:89] found id: ""
	I1213 10:37:58.951328  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.951335  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:58.951340  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:58.951398  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:58.981974  947325 cri.go:89] found id: ""
	I1213 10:37:58.981989  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.981996  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:58.982003  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:58.982014  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:59.047152  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:59.047172  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:59.062001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:59.062019  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:59.127736  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:59.127748  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:59.127759  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:59.196288  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:59.196308  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:01.726269  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.738227  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:01.738290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:01.765402  947325 cri.go:89] found id: ""
	I1213 10:38:01.765416  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.765423  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:01.765428  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:01.765487  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:01.797073  947325 cri.go:89] found id: ""
	I1213 10:38:01.797087  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.797094  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:01.797105  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:01.797165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:01.822923  947325 cri.go:89] found id: ""
	I1213 10:38:01.822936  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.822943  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:01.822948  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:01.823004  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:01.847458  947325 cri.go:89] found id: ""
	I1213 10:38:01.847472  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.847479  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:01.847484  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:01.847542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:01.876363  947325 cri.go:89] found id: ""
	I1213 10:38:01.876376  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.876383  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:01.876388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:01.876445  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:01.901894  947325 cri.go:89] found id: ""
	I1213 10:38:01.901908  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.901915  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:01.901920  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:01.901977  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:01.927538  947325 cri.go:89] found id: ""
	I1213 10:38:01.927556  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.927563  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:01.927571  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:01.927585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:01.993043  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:01.993063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:02.009861  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:02.009878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:02.079070  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:02.079087  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:02.079097  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:02.150335  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:02.150355  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.680156  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.690471  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:04.690534  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:04.717027  947325 cri.go:89] found id: ""
	I1213 10:38:04.717042  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.717049  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:04.717055  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:04.717116  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:04.751100  947325 cri.go:89] found id: ""
	I1213 10:38:04.751114  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.751121  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:04.751126  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:04.751185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:04.785118  947325 cri.go:89] found id: ""
	I1213 10:38:04.785133  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.785140  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:04.785145  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:04.785206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:04.811838  947325 cri.go:89] found id: ""
	I1213 10:38:04.811852  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.811859  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:04.811864  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:04.811924  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:04.837476  947325 cri.go:89] found id: ""
	I1213 10:38:04.837489  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.837497  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:04.837502  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:04.837589  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:04.863616  947325 cri.go:89] found id: ""
	I1213 10:38:04.863630  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.863637  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:04.863642  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:04.864028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:04.897282  947325 cri.go:89] found id: ""
	I1213 10:38:04.897297  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.897304  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:04.897311  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:04.897322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:04.970089  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:04.970112  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.998787  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:04.998808  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:05.071114  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:05.071136  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:05.086764  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:05.086780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:05.152705  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
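
Every describe-nodes attempt in this log fails the same way: kubectl cannot reach https://localhost:8441 and gets "connect: connection refused", which means nothing is listening on the apiserver port at all. A quick way to confirm that from inside the node, as a sketch (port 8441 is taken from the errors above; ss and curl are assumed to be available in the guest):

	# show whether anything is bound to the apiserver port
	sudo ss -tlnp | grep 8441 || echo "no listener on 8441"
	# an HTTPS probe of the health endpoint fails exactly like kubectl while the apiserver is down
	curl -sk https://localhost:8441/healthz
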
	I1213 10:38:07.652961  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.663190  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:07.663256  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:07.687596  947325 cri.go:89] found id: ""
	I1213 10:38:07.687611  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.687619  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:07.687624  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:07.687682  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:07.712358  947325 cri.go:89] found id: ""
	I1213 10:38:07.712372  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.712379  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:07.712384  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:07.712443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:07.747606  947325 cri.go:89] found id: ""
	I1213 10:38:07.747620  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.747627  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:07.747632  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:07.747686  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:07.779928  947325 cri.go:89] found id: ""
	I1213 10:38:07.779942  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.779949  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:07.779954  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:07.780010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:07.809892  947325 cri.go:89] found id: ""
	I1213 10:38:07.809905  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.809912  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:07.809917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:07.809976  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:07.835954  947325 cri.go:89] found id: ""
	I1213 10:38:07.835969  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.835977  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:07.835983  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:07.836045  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:07.863613  947325 cri.go:89] found id: ""
	I1213 10:38:07.863628  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.863635  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:07.863643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:07.863653  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:07.934015  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:07.934035  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:07.949065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:07.949082  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:08.016099  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:08.016110  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:08.016120  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:08.086624  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:08.086643  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:10.620779  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.631455  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:10.631519  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:10.657004  947325 cri.go:89] found id: ""
	I1213 10:38:10.657018  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.657025  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:10.657031  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:10.657091  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:10.682863  947325 cri.go:89] found id: ""
	I1213 10:38:10.682879  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.682887  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:10.682892  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:10.682952  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:10.710656  947325 cri.go:89] found id: ""
	I1213 10:38:10.710671  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.710678  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:10.710684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:10.710744  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:10.751941  947325 cri.go:89] found id: ""
	I1213 10:38:10.751955  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.751962  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:10.751967  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:10.752027  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:10.784379  947325 cri.go:89] found id: ""
	I1213 10:38:10.784393  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.784400  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:10.784405  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:10.784462  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:10.812194  947325 cri.go:89] found id: ""
	I1213 10:38:10.812208  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.812215  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:10.812220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:10.812279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:10.837693  947325 cri.go:89] found id: ""
	I1213 10:38:10.837706  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.837714  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:10.837721  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:10.837732  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:10.903946  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:10.903965  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:10.918956  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:10.918972  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:10.991627  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:10.991638  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:10.991648  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:11.064139  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:11.064160  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:13.600555  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.610666  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:13.610728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:13.635608  947325 cri.go:89] found id: ""
	I1213 10:38:13.635622  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.635629  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:13.635635  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:13.635694  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:13.660494  947325 cri.go:89] found id: ""
	I1213 10:38:13.660509  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.660516  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:13.660521  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:13.660580  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:13.686792  947325 cri.go:89] found id: ""
	I1213 10:38:13.686807  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.686814  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:13.686820  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:13.686877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:13.712337  947325 cri.go:89] found id: ""
	I1213 10:38:13.712351  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.712358  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:13.712364  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:13.712421  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:13.751688  947325 cri.go:89] found id: ""
	I1213 10:38:13.751703  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.751710  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:13.751716  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:13.751771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:13.778873  947325 cri.go:89] found id: ""
	I1213 10:38:13.778886  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.778893  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:13.778898  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:13.778955  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:13.808036  947325 cri.go:89] found id: ""
	I1213 10:38:13.808050  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.808057  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:13.808065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:13.808081  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:13.874152  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:13.874162  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:13.874173  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:13.943404  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:13.943424  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:13.971540  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:13.971557  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:14.040558  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:14.040581  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
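
The pgrep probes above land roughly three seconds apart (10:38:10.6, 10:38:13.6, 10:38:16.5, ...), so the whole block is one polling loop that never sees the apiserver come up. A rough bash equivalent of that wait, offered as a sketch rather than minikube's actual implementation:

	# poll for the apiserver process about every 3 seconds, as the timestamps suggest
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done
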
	I1213 10:38:16.556175  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.566366  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:16.566428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:16.591757  947325 cri.go:89] found id: ""
	I1213 10:38:16.591772  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.591779  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:16.591785  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:16.591842  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:16.617244  947325 cri.go:89] found id: ""
	I1213 10:38:16.617259  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.617266  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:16.617271  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:16.617329  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:16.644168  947325 cri.go:89] found id: ""
	I1213 10:38:16.644182  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.644189  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:16.644194  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:16.644253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:16.673646  947325 cri.go:89] found id: ""
	I1213 10:38:16.673659  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.673666  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:16.673671  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:16.673729  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:16.698771  947325 cri.go:89] found id: ""
	I1213 10:38:16.698785  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.698793  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:16.698798  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:16.698857  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:16.726980  947325 cri.go:89] found id: ""
	I1213 10:38:16.726994  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.727001  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:16.727006  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:16.727066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:16.773642  947325 cri.go:89] found id: ""
	I1213 10:38:16.773657  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.773665  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:16.773673  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:16.773685  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:16.807643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:16.807660  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:16.874674  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:16.874698  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:16.890281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:16.890299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:16.958318  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
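The cri.go/crictl lines above repeat one pattern per control-plane component: list containers in any state whose name matches, where found id: "" means the query returned nothing. A hedged shell equivalent of that enumeration, reusing the exact crictl flags the log records (the loop itself is an illustration, not minikube's code):

	# Sketch of the per-component container query seen in the log (same flags as recorded):
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")   # -a: all states, --quiet: print IDs only
	  echo "$c: ${ids:-<none>}"                      # empty output corresponds to 'found id: ""' above
	done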
	I1213 10:38:16.958330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:16.958343  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:19.528319  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.539728  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:19.539789  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:19.571108  947325 cri.go:89] found id: ""
	I1213 10:38:19.571121  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.571129  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:19.571134  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:19.571194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:19.597765  947325 cri.go:89] found id: ""
	I1213 10:38:19.597779  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.597787  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:19.597792  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:19.597853  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:19.623110  947325 cri.go:89] found id: ""
	I1213 10:38:19.623124  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.623137  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:19.623142  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:19.623204  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:19.648553  947325 cri.go:89] found id: ""
	I1213 10:38:19.648568  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.648575  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:19.648580  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:19.648652  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:19.674550  947325 cri.go:89] found id: ""
	I1213 10:38:19.674565  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.674572  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:19.674577  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:19.674635  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:19.704458  947325 cri.go:89] found id: ""
	I1213 10:38:19.704473  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.704480  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:19.704486  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:19.704560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:19.742545  947325 cri.go:89] found id: ""
	I1213 10:38:19.742559  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.742566  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:19.742573  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:19.742584  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:19.818214  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:19.818236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:19.833741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:19.833757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:19.899700  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:19.899710  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:19.899731  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:19.969264  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:19.969284  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
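For reference, the four "Gathering logs for ..." steps interleaved with the retries map to these node-side commands, quoted verbatim from the lines above (nothing here is new):

	sudo journalctl -u kubelet -n 400      # kubelet logs, last 400 lines
	sudo journalctl -u crio -n 400         # CRI-O runtime logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status, docker fallback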
	I1213 10:38:22.501918  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.513303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:22.513368  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:22.542006  947325 cri.go:89] found id: ""
	I1213 10:38:22.542020  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.542028  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:22.542033  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:22.542109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:22.572046  947325 cri.go:89] found id: ""
	I1213 10:38:22.572061  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.572068  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:22.572073  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:22.572131  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:22.599640  947325 cri.go:89] found id: ""
	I1213 10:38:22.599654  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.599660  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:22.599665  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:22.599728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:22.628632  947325 cri.go:89] found id: ""
	I1213 10:38:22.628646  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.628653  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:22.628658  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:22.628717  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:22.655032  947325 cri.go:89] found id: ""
	I1213 10:38:22.655046  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.655053  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:22.655058  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:22.655119  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:22.682403  947325 cri.go:89] found id: ""
	I1213 10:38:22.682422  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.682431  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:22.682436  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:22.682511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:22.709263  947325 cri.go:89] found id: ""
	I1213 10:38:22.709277  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.709286  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:22.709293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:22.709307  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:22.748554  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:22.748573  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:22.820355  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:22.820376  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:22.836069  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:22.836100  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:22.902594  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:22.902605  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:22.902616  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.474313  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.484536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:25.484600  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:25.512648  947325 cri.go:89] found id: ""
	I1213 10:38:25.512662  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.512670  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:25.512675  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:25.512736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:25.545720  947325 cri.go:89] found id: ""
	I1213 10:38:25.545739  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.545746  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:25.545752  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:25.545821  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:25.572807  947325 cri.go:89] found id: ""
	I1213 10:38:25.572820  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.572827  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:25.572832  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:25.572890  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:25.597850  947325 cri.go:89] found id: ""
	I1213 10:38:25.597864  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.597871  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:25.597876  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:25.597939  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:25.622944  947325 cri.go:89] found id: ""
	I1213 10:38:25.622958  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.622965  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:25.622971  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:25.623030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:25.647255  947325 cri.go:89] found id: ""
	I1213 10:38:25.647268  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.647276  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:25.647281  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:25.647339  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:25.672821  947325 cri.go:89] found id: ""
	I1213 10:38:25.672837  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.672844  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:25.672864  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:25.672875  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.744377  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:25.744397  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:25.773682  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:25.773699  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:25.843372  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:25.843396  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:25.858420  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:25.858437  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:25.923733  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:28.424008  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.434425  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:28.434490  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:28.459479  947325 cri.go:89] found id: ""
	I1213 10:38:28.459493  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.459501  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:28.459506  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:28.459569  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:28.488343  947325 cri.go:89] found id: ""
	I1213 10:38:28.488357  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.488365  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:28.488370  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:28.488431  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:28.513634  947325 cri.go:89] found id: ""
	I1213 10:38:28.513649  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.513656  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:28.513661  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:28.513719  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:28.540169  947325 cri.go:89] found id: ""
	I1213 10:38:28.540182  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.540190  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:28.540195  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:28.540253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:28.564331  947325 cri.go:89] found id: ""
	I1213 10:38:28.564344  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.564351  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:28.564356  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:28.564415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:28.592829  947325 cri.go:89] found id: ""
	I1213 10:38:28.592844  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.592851  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:28.592856  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:28.592913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:28.618020  947325 cri.go:89] found id: ""
	I1213 10:38:28.618035  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.618044  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:28.618052  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:28.618063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:28.685306  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:28.685326  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:28.713761  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:28.713779  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:28.794463  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:28.794484  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:28.809677  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:28.809696  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:28.870924  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.371199  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:31.381501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.381583  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.408362  947325 cri.go:89] found id: ""
	I1213 10:38:31.408376  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.408383  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.408388  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.408454  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.434743  947325 cri.go:89] found id: ""
	I1213 10:38:31.434758  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.434766  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.434772  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.434831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.467710  947325 cri.go:89] found id: ""
	I1213 10:38:31.467724  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.467731  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.467736  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.467795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.493177  947325 cri.go:89] found id: ""
	I1213 10:38:31.493191  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.493198  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.493203  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.493263  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.517966  947325 cri.go:89] found id: ""
	I1213 10:38:31.517980  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.517987  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.517992  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.518057  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.542186  947325 cri.go:89] found id: ""
	I1213 10:38:31.542201  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.542208  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.542213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.542270  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.567569  947325 cri.go:89] found id: ""
	I1213 10:38:31.567583  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.567590  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:31.567598  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.567609  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:31.633128  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.633147  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.647898  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.647916  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.713585  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.713595  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:31.713606  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:31.784338  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.784357  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.315454  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.327061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.327130  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.356795  947325 cri.go:89] found id: ""
	I1213 10:38:34.356809  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.356817  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.356822  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.356892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.384789  947325 cri.go:89] found id: ""
	I1213 10:38:34.384804  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.384812  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.384817  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.384907  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.410778  947325 cri.go:89] found id: ""
	I1213 10:38:34.410791  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.410799  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.410804  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.410861  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.440426  947325 cri.go:89] found id: ""
	I1213 10:38:34.440440  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.440454  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.440459  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.440514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.465148  947325 cri.go:89] found id: ""
	I1213 10:38:34.465162  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.465170  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.465175  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.465236  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.491230  947325 cri.go:89] found id: ""
	I1213 10:38:34.491245  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.491253  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.491259  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.491364  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.520190  947325 cri.go:89] found id: ""
	I1213 10:38:34.520205  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.520213  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:34.520220  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.520235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.552635  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.552652  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.617894  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.617914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:34.632507  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.632528  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.697693  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:34.697704  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:34.697715  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.276776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.287236  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.287306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.314091  947325 cri.go:89] found id: ""
	I1213 10:38:37.314105  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.314112  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.314118  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.314180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.343079  947325 cri.go:89] found id: ""
	I1213 10:38:37.343092  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.343099  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.343104  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.343162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.371406  947325 cri.go:89] found id: ""
	I1213 10:38:37.371420  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.371428  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.371432  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.371489  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.400383  947325 cri.go:89] found id: ""
	I1213 10:38:37.400398  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.400405  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.400415  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.400473  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.432217  947325 cri.go:89] found id: ""
	I1213 10:38:37.432232  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.432240  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.432245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.432306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.459687  947325 cri.go:89] found id: ""
	I1213 10:38:37.459701  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.459708  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.459713  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.459771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.491295  947325 cri.go:89] found id: ""
	I1213 10:38:37.491309  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.491316  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.491324  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:37.491335  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.569044  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.569068  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.598399  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.598416  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:37.669854  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.669873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.685001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.685024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.764039  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.754418   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.755520   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.757588   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.758501   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.759525   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:37.754418   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.755520   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.757588   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.758501   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.759525   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
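The recurring "failed describe nodes" entry is minikube running its bundled kubectl on the node against the node-local kubeconfig; that kubeconfig targets localhost:8441 (visible in the dial errors), so the command exits 1 for as long as no apiserver listens there. The invocation, verbatim from the log, wrapped onto two lines only for readability:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # exits 1 with "connection refused" until kube-apiserver is up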
	I1213 10:38:40.265130  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.276597  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.276660  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.301799  947325 cri.go:89] found id: ""
	I1213 10:38:40.301815  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.301822  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.301828  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.301884  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.328096  947325 cri.go:89] found id: ""
	I1213 10:38:40.328110  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.328117  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.328122  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.328180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.352505  947325 cri.go:89] found id: ""
	I1213 10:38:40.352520  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.352527  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.352532  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.352592  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.381218  947325 cri.go:89] found id: ""
	I1213 10:38:40.381233  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.381240  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.381245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.381303  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.406747  947325 cri.go:89] found id: ""
	I1213 10:38:40.406761  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.406769  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.406774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.406836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.432179  947325 cri.go:89] found id: ""
	I1213 10:38:40.432193  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.432200  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.432230  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.432294  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.457241  947325 cri.go:89] found id: ""
	I1213 10:38:40.457256  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.457263  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.457270  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.457281  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:40.485384  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.485400  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.553931  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.553950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.568552  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.568568  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.631691  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.623997   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.624643   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626097   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626582   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.628021   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:40.623997   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.624643   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626097   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626582   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.628021   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:40.631701  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:40.631711  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.202405  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.212618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.212681  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.237960  947325 cri.go:89] found id: ""
	I1213 10:38:43.237975  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.237981  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.237986  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.238046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.262400  947325 cri.go:89] found id: ""
	I1213 10:38:43.262415  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.262422  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.262427  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.262485  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.287113  947325 cri.go:89] found id: ""
	I1213 10:38:43.287126  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.287133  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.287138  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.287194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.311437  947325 cri.go:89] found id: ""
	I1213 10:38:43.311451  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.311459  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.311464  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.311520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.338038  947325 cri.go:89] found id: ""
	I1213 10:38:43.338052  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.338059  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.338066  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.338125  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.363248  947325 cri.go:89] found id: ""
	I1213 10:38:43.363262  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.363269  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.363274  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.363331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.388331  947325 cri.go:89] found id: ""
	I1213 10:38:43.388346  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.388353  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.388361  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.388371  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.456040  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.448208   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.448885   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.450561   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.451211   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.452293   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:43.448208   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.448885   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.450561   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.451211   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.452293   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:43.456051  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:43.456062  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.529676  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.529697  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:43.557667  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.557683  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.626256  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.626276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.141151  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.151629  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.151691  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.177078  947325 cri.go:89] found id: ""
	I1213 10:38:46.177092  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.177099  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.177104  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.177163  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.203681  947325 cri.go:89] found id: ""
	I1213 10:38:46.203695  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.203702  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.203707  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.203765  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.228801  947325 cri.go:89] found id: ""
	I1213 10:38:46.228815  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.228823  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.228828  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.228892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.254742  947325 cri.go:89] found id: ""
	I1213 10:38:46.254756  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.254763  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.254768  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.254825  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.286504  947325 cri.go:89] found id: ""
	I1213 10:38:46.286522  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.286529  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.286534  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.286596  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.311507  947325 cri.go:89] found id: ""
	I1213 10:38:46.311523  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.311531  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.311536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.311599  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.340455  947325 cri.go:89] found id: ""
	I1213 10:38:46.340469  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.340477  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.340496  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.340508  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.410798  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.410817  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.425740  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.425758  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.488528  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.479589   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.480382   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482285   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482891   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.484595   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:46.479589   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.480382   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482285   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482891   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.484595   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:46.488537  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:46.488549  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:46.558649  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.558668  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:49.089125  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:49.099199  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:49.099261  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.128242  947325 cri.go:89] found id: ""
	I1213 10:38:49.128256  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.128263  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.128268  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.128328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.154103  947325 cri.go:89] found id: ""
	I1213 10:38:49.154117  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.154124  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.154129  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.154189  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.178738  947325 cri.go:89] found id: ""
	I1213 10:38:49.178754  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.178762  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.178767  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.178824  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.203209  947325 cri.go:89] found id: ""
	I1213 10:38:49.203223  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.203230  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.203235  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.203290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.228158  947325 cri.go:89] found id: ""
	I1213 10:38:49.228174  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.228181  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.228186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.228245  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.257410  947325 cri.go:89] found id: ""
	I1213 10:38:49.257425  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.257432  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.257437  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.257503  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.284405  947325 cri.go:89] found id: ""
	I1213 10:38:49.284419  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.284428  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.284436  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.284447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.350814  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.350834  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:49.365897  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.365914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.428434  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.419689   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.420411   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422194   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422785   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.424440   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:49.419689   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.420411   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422194   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422785   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.424440   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:49.428445  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:49.428455  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:49.497319  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.497338  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.026790  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:52.037493  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:52.037629  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:52.065928  947325 cri.go:89] found id: ""
	I1213 10:38:52.065942  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.065959  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:52.065966  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:52.066030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:52.093348  947325 cri.go:89] found id: ""
	I1213 10:38:52.093377  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.093385  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:52.093391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:52.093461  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.120408  947325 cri.go:89] found id: ""
	I1213 10:38:52.120438  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.120446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.120451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.120520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.151619  947325 cri.go:89] found id: ""
	I1213 10:38:52.151633  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.151640  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.151645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.151709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.181293  947325 cri.go:89] found id: ""
	I1213 10:38:52.181307  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.181314  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.181319  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.181381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.207056  947325 cri.go:89] found id: ""
	I1213 10:38:52.207073  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.207080  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.207085  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.207144  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.232482  947325 cri.go:89] found id: ""
	I1213 10:38:52.232495  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.232503  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.232511  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.232523  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:52.298884  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.298908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.314165  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.314184  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.379432  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.370728   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.371164   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.372944   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.373396   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.375062   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:52.370728   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.371164   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.372944   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.373396   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.375062   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:52.379442  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:52.379454  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:52.447720  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.447739  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:54.981781  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.994265  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.994331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:55.034512  947325 cri.go:89] found id: ""
	I1213 10:38:55.034527  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.034535  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:55.034541  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:55.034603  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:55.064371  947325 cri.go:89] found id: ""
	I1213 10:38:55.064385  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.064393  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:55.064399  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:55.064464  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:55.094614  947325 cri.go:89] found id: ""
	I1213 10:38:55.094628  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.094635  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:55.094640  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:55.094703  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:55.122445  947325 cri.go:89] found id: ""
	I1213 10:38:55.122469  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.122476  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:55.122482  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:55.122565  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:55.149483  947325 cri.go:89] found id: ""
	I1213 10:38:55.149497  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.149505  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:55.149510  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:55.149608  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.177190  947325 cri.go:89] found id: ""
	I1213 10:38:55.177204  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.177211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.177216  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.177276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.205792  947325 cri.go:89] found id: ""
	I1213 10:38:55.205805  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.205813  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.205820  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:55.205831  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:55.274521  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.274543  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.303850  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.303867  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.372053  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.372072  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.386741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.386757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.453760  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.443866   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.444485   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.446205   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448348   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448876   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:55.443866   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.444485   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.446205   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448348   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448876   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:57.954020  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.964050  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.964109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.998468  947325 cri.go:89] found id: ""
	I1213 10:38:57.998484  947325 logs.go:282] 0 containers: []
	W1213 10:38:57.998492  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.998497  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.998564  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:58.035565  947325 cri.go:89] found id: ""
	I1213 10:38:58.035580  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.035587  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:58.035592  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:58.035654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:58.066882  947325 cri.go:89] found id: ""
	I1213 10:38:58.066903  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.066912  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:58.066917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:58.066978  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:58.092977  947325 cri.go:89] found id: ""
	I1213 10:38:58.093007  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.093014  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:58.093019  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:58.093088  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:58.123222  947325 cri.go:89] found id: ""
	I1213 10:38:58.123235  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.123243  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:58.123248  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:58.123311  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:58.148191  947325 cri.go:89] found id: ""
	I1213 10:38:58.148204  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.148211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:58.148226  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:58.148283  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.174245  947325 cri.go:89] found id: ""
	I1213 10:38:58.174259  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.174266  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:58.174274  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.174286  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.238353  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.230226   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.230884   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232487   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232939   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.234404   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:58.230226   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.230884   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232487   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232939   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.234404   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:58.238363  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:58.238374  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:58.310390  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.310414  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:58.339218  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.339235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.411033  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.411053  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:00.926322  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.937217  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.937279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.963631  947325 cri.go:89] found id: ""
	I1213 10:39:00.963645  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.963653  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.963658  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.963720  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.992312  947325 cri.go:89] found id: ""
	I1213 10:39:00.992327  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.992334  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.992340  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.992402  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:01.019653  947325 cri.go:89] found id: ""
	I1213 10:39:01.019667  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.019674  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:01.019679  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:01.019737  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:01.048197  947325 cri.go:89] found id: ""
	I1213 10:39:01.048211  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.048218  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:01.048224  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:01.048278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:01.077274  947325 cri.go:89] found id: ""
	I1213 10:39:01.077288  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.077296  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:01.077301  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:01.077359  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:01.102210  947325 cri.go:89] found id: ""
	I1213 10:39:01.102225  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.102232  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:01.102237  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:01.102296  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:01.127343  947325 cri.go:89] found id: ""
	I1213 10:39:01.127357  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.127364  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:01.127372  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:01.127384  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:01.193045  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:01.184559   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.185631   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.186444   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187426   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187971   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:01.184559   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.185631   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.186444   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187426   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187971   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:01.193056  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:01.193066  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:01.263652  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.263672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.300661  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.300679  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.369051  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.369070  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.885575  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.895834  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.895898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.926313  947325 cri.go:89] found id: ""
	I1213 10:39:03.926327  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.926335  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.926339  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.926396  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.954240  947325 cri.go:89] found id: ""
	I1213 10:39:03.954254  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.954261  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.954266  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.954324  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.984134  947325 cri.go:89] found id: ""
	I1213 10:39:03.984148  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.984154  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.984159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.984224  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:04.016879  947325 cri.go:89] found id: ""
	I1213 10:39:04.016894  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.016901  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:04.016906  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:04.016965  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:04.041176  947325 cri.go:89] found id: ""
	I1213 10:39:04.041190  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.041203  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:04.041208  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:04.041267  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:04.066331  947325 cri.go:89] found id: ""
	I1213 10:39:04.066345  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.066351  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:04.066357  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:04.066415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:04.090857  947325 cri.go:89] found id: ""
	I1213 10:39:04.090886  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.090895  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:04.090903  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:04.090917  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:04.156570  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:04.156590  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:04.171387  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:04.171404  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.240263  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:04.240273  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:04.240285  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:04.319651  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.319672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:06.852882  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.864121  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.864186  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.890733  947325 cri.go:89] found id: ""
	I1213 10:39:06.890748  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.890756  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.890761  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.890819  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.917207  947325 cri.go:89] found id: ""
	I1213 10:39:06.917222  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.917228  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.917234  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.917291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.943186  947325 cri.go:89] found id: ""
	I1213 10:39:06.943201  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.943208  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.943213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.943278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.973557  947325 cri.go:89] found id: ""
	I1213 10:39:06.973571  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.973579  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.973584  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.973641  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:07.004748  947325 cri.go:89] found id: ""
	I1213 10:39:07.004770  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.004778  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:07.004783  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:07.004851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:07.030997  947325 cri.go:89] found id: ""
	I1213 10:39:07.031011  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.031019  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:07.031024  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:07.031080  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:07.055983  947325 cri.go:89] found id: ""
	I1213 10:39:07.055997  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.056004  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:07.056012  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:07.056024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:07.084902  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:07.084919  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:07.153213  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:07.153232  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:07.168429  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:07.168446  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.232563  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:07.232586  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:07.232598  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:09.804561  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.814452  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.814514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.840080  947325 cri.go:89] found id: ""
	I1213 10:39:09.840093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.840101  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.840106  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.840170  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.864603  947325 cri.go:89] found id: ""
	I1213 10:39:09.864617  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.864625  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.864630  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.864697  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.889079  947325 cri.go:89] found id: ""
	I1213 10:39:09.889093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.889101  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.889106  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.889162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.915869  947325 cri.go:89] found id: ""
	I1213 10:39:09.915883  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.915890  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.915895  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.915954  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.945590  947325 cri.go:89] found id: ""
	I1213 10:39:09.945603  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.945610  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.945618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.945678  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.971712  947325 cri.go:89] found id: ""
	I1213 10:39:09.971725  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.971732  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.971737  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.971798  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:10.003581  947325 cri.go:89] found id: ""
	I1213 10:39:10.003600  947325 logs.go:282] 0 containers: []
	W1213 10:39:10.003608  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:10.003618  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:10.003633  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:10.077821  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:10.077842  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:10.108375  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:10.108392  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:10.178400  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:10.178420  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:10.193608  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:10.193647  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:10.270772  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:12.771904  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.782049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.782110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.806673  947325 cri.go:89] found id: ""
	I1213 10:39:12.806687  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.806695  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.806700  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.806757  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.835814  947325 cri.go:89] found id: ""
	I1213 10:39:12.835829  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.835836  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.835841  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.835898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.861712  947325 cri.go:89] found id: ""
	I1213 10:39:12.861727  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.861734  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.861740  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.861804  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.886652  947325 cri.go:89] found id: ""
	I1213 10:39:12.886666  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.886673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.886678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.886736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.916010  947325 cri.go:89] found id: ""
	I1213 10:39:12.916025  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.916032  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.916037  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.916100  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.946655  947325 cri.go:89] found id: ""
	I1213 10:39:12.946672  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.946679  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.946684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.946748  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.976684  947325 cri.go:89] found id: ""
	I1213 10:39:12.976698  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.976705  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.976713  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.976726  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:13.043449  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:13.043472  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:13.059281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:13.059299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:13.122969  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:13.122981  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:13.122991  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:13.193301  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:13.193322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:15.728135  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.739049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.739110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.764321  947325 cri.go:89] found id: ""
	I1213 10:39:15.764335  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.764342  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.764348  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.764410  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.794053  947325 cri.go:89] found id: ""
	I1213 10:39:15.794068  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.794077  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.794083  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.794138  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.819708  947325 cri.go:89] found id: ""
	I1213 10:39:15.819721  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.819729  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.819734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.819793  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.850534  947325 cri.go:89] found id: ""
	I1213 10:39:15.850548  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.850556  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.850561  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.850618  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.879609  947325 cri.go:89] found id: ""
	I1213 10:39:15.879623  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.879631  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.879636  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.879700  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.908873  947325 cri.go:89] found id: ""
	I1213 10:39:15.908887  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.908895  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.908901  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.908967  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.936537  947325 cri.go:89] found id: ""
	I1213 10:39:15.936552  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.936559  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.936567  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:15.936580  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:16.005668  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:16.005690  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:16.036804  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:16.036822  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:16.105762  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:16.105780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:16.121830  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:16.121849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:16.189324  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:18.689610  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.699729  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.699788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.725083  947325 cri.go:89] found id: ""
	I1213 10:39:18.725097  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.725105  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.725110  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.725165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.751300  947325 cri.go:89] found id: ""
	I1213 10:39:18.751315  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.751327  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.751333  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.751390  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.776458  947325 cri.go:89] found id: ""
	I1213 10:39:18.776473  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.776480  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.776485  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.776543  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.801403  947325 cri.go:89] found id: ""
	I1213 10:39:18.801416  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.801423  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.801428  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.801488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.828035  947325 cri.go:89] found id: ""
	I1213 10:39:18.828053  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.828060  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.828065  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.828122  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.852563  947325 cri.go:89] found id: ""
	I1213 10:39:18.852577  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.852583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.852589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.852647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.879882  947325 cri.go:89] found id: ""
	I1213 10:39:18.879897  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.879904  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.879912  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.879922  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.913762  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.913788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.978817  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.978840  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.994917  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.994936  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:19.062190  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:19.062201  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:19.062213  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.629331  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.639522  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.639593  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.664074  947325 cri.go:89] found id: ""
	I1213 10:39:21.664089  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.664097  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.664102  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.664164  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.689123  947325 cri.go:89] found id: ""
	I1213 10:39:21.689136  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.689144  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.689149  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.689206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.713736  947325 cri.go:89] found id: ""
	I1213 10:39:21.713750  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.713758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.713762  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.713817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.741978  947325 cri.go:89] found id: ""
	I1213 10:39:21.741991  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.741999  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.742004  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.742063  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.767443  947325 cri.go:89] found id: ""
	I1213 10:39:21.767458  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.767464  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.767469  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.767526  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.792419  947325 cri.go:89] found id: ""
	I1213 10:39:21.792434  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.792457  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.792463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.792529  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.821837  947325 cri.go:89] found id: ""
	I1213 10:39:21.821851  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.821859  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.821867  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.821878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.836299  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.836315  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.902625  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:21.902635  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:21.902646  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.971184  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.971204  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:22.003828  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:22.003847  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.576083  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.587706  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.587784  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.613620  947325 cri.go:89] found id: ""
	I1213 10:39:24.613635  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.613643  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.613648  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.613706  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.639792  947325 cri.go:89] found id: ""
	I1213 10:39:24.639807  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.639814  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.639820  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.639897  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.664551  947325 cri.go:89] found id: ""
	I1213 10:39:24.664566  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.664573  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.664578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.664638  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.689748  947325 cri.go:89] found id: ""
	I1213 10:39:24.689762  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.689769  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.689774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.689831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.718617  947325 cri.go:89] found id: ""
	I1213 10:39:24.718632  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.718639  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.718645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.718702  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.748026  947325 cri.go:89] found id: ""
	I1213 10:39:24.748040  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.748047  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.748052  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.748117  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.774049  947325 cri.go:89] found id: ""
	I1213 10:39:24.774063  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.774070  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.774084  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.774095  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.840008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.840029  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.855570  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.855587  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.924254  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:24.924266  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:24.924276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:24.993620  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.993639  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:27.529665  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:27.539536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:27.539597  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:27.564505  947325 cri.go:89] found id: ""
	I1213 10:39:27.564519  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.564526  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:27.564531  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:27.564591  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:27.590383  947325 cri.go:89] found id: ""
	I1213 10:39:27.590397  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.590405  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:27.590410  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:27.590474  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:27.615895  947325 cri.go:89] found id: ""
	I1213 10:39:27.615909  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.615916  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:27.615921  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:27.615979  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:27.647656  947325 cri.go:89] found id: ""
	I1213 10:39:27.647670  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.647678  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:27.647683  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:27.647741  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:27.673365  947325 cri.go:89] found id: ""
	I1213 10:39:27.673379  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.673385  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:27.673390  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:27.673448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:27.698006  947325 cri.go:89] found id: ""
	I1213 10:39:27.698020  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.698028  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:27.698033  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:27.698096  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:27.722664  947325 cri.go:89] found id: ""
	I1213 10:39:27.722688  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.722695  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:27.722702  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:27.722713  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:27.793605  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:27.793629  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:27.808404  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:27.808420  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:27.875877  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:27.866856   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.867426   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869149   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869660   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.871392   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:27.866856   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.867426   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869149   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869660   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.871392   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:27.875886  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:27.875898  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:27.944703  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:27.944723  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:30.475788  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:30.486929  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:30.486993  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:30.523816  947325 cri.go:89] found id: ""
	I1213 10:39:30.523830  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.523837  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:30.523843  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:30.523899  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:30.556573  947325 cri.go:89] found id: ""
	I1213 10:39:30.556586  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.556593  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:30.556598  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:30.556666  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:30.581886  947325 cri.go:89] found id: ""
	I1213 10:39:30.581900  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.581907  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:30.581912  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:30.581972  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:30.611853  947325 cri.go:89] found id: ""
	I1213 10:39:30.611878  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.611886  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:30.611891  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:30.611959  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:30.636125  947325 cri.go:89] found id: ""
	I1213 10:39:30.636140  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.636147  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:30.636152  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:30.636213  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:30.661404  947325 cri.go:89] found id: ""
	I1213 10:39:30.661418  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.661425  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:30.661430  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:30.661490  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:30.686369  947325 cri.go:89] found id: ""
	I1213 10:39:30.686382  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.686390  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:30.686397  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:30.686408  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:30.752100  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:30.752120  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:30.766471  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:30.766487  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:30.831347  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:30.823244   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.823892   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.825523   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.826100   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.827547   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:30.823244   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.823892   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.825523   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.826100   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.827547   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:30.831356  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:30.831367  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:30.899699  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:30.899718  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.428636  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:33.438752  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:33.438815  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:33.467201  947325 cri.go:89] found id: ""
	I1213 10:39:33.467215  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.467222  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:33.467227  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:33.467285  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:33.496554  947325 cri.go:89] found id: ""
	I1213 10:39:33.496570  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.496577  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:33.496582  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:33.496650  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:33.523431  947325 cri.go:89] found id: ""
	I1213 10:39:33.523446  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.523453  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:33.523457  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:33.523517  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:33.559332  947325 cri.go:89] found id: ""
	I1213 10:39:33.559346  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.559353  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:33.559358  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:33.559413  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:33.587632  947325 cri.go:89] found id: ""
	I1213 10:39:33.587645  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.587653  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:33.587658  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:33.587714  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:33.612223  947325 cri.go:89] found id: ""
	I1213 10:39:33.612237  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.612266  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:33.612271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:33.612339  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:33.636321  947325 cri.go:89] found id: ""
	I1213 10:39:33.636344  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.636351  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:33.636359  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:33.636373  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:33.650977  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:33.650993  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:33.710121  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:33.702522   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.703286   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.704577   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.705062   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.706497   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:33.702522   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.703286   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.704577   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.705062   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.706497   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:33.710132  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:33.710143  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:33.781081  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:33.781101  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.810866  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:33.810882  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
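
Each failed poll above triggers the same diagnostic sweep: the kubelet journal, dmesg, `kubectl describe nodes`, the CRI-O journal, and a container-status listing. The sweep can be reproduced directly on the node; the commands below are copied from the log, with only the profile name as a placeholder:

    minikube -p <profile> ssh
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
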
	I1213 10:39:36.380753  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:36.390598  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:36.390659  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:36.416068  947325 cri.go:89] found id: ""
	I1213 10:39:36.416083  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.416090  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:36.416097  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:36.416156  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:36.443939  947325 cri.go:89] found id: ""
	I1213 10:39:36.443954  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.443968  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:36.443973  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:36.444031  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:36.468690  947325 cri.go:89] found id: ""
	I1213 10:39:36.468704  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.468711  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:36.468716  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:36.468772  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:36.498941  947325 cri.go:89] found id: ""
	I1213 10:39:36.498955  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.498962  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:36.498967  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:36.499033  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:36.538080  947325 cri.go:89] found id: ""
	I1213 10:39:36.538103  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.538111  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:36.538116  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:36.538179  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:36.563132  947325 cri.go:89] found id: ""
	I1213 10:39:36.563147  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.563154  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:36.563160  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:36.563217  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:36.588756  947325 cri.go:89] found id: ""
	I1213 10:39:36.588780  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.588789  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:36.588797  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:36.588812  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.653330  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:36.653350  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:36.670404  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:36.670421  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:36.742327  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:36.732861   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.733828   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.734900   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.736542   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.737273   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:36.732861   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.733828   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.734900   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.736542   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.737273   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:36.742339  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:36.742350  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:36.811143  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:36.811163  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:39.339643  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:39.349836  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:39.349897  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:39.375161  947325 cri.go:89] found id: ""
	I1213 10:39:39.375175  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.375194  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:39.375200  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:39.375262  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:39.400364  947325 cri.go:89] found id: ""
	I1213 10:39:39.400393  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.400402  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:39.400407  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:39.400473  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:39.427167  947325 cri.go:89] found id: ""
	I1213 10:39:39.427182  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.427189  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:39.427195  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:39.427270  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:39.455933  947325 cri.go:89] found id: ""
	I1213 10:39:39.455960  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.455967  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:39.455973  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:39.456041  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:39.489827  947325 cri.go:89] found id: ""
	I1213 10:39:39.489840  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.489847  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:39.489852  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:39.489920  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:39.529776  947325 cri.go:89] found id: ""
	I1213 10:39:39.529790  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.529797  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:39.529814  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:39.529890  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:39.558531  947325 cri.go:89] found id: ""
	I1213 10:39:39.558545  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.558552  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:39.558560  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:39.558571  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:39.625366  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:39.625384  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:39.640509  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:39.640525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:39.706928  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:39.697596   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.698528   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700141   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700637   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.702492   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:39.697596   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.698528   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700141   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700637   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.702492   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:39.706940  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:39.706952  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:39.779211  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:39.779231  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.308782  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:42.319639  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:42.319702  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:42.346935  947325 cri.go:89] found id: ""
	I1213 10:39:42.346959  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.346970  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:42.346977  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:42.347038  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:42.378296  947325 cri.go:89] found id: ""
	I1213 10:39:42.378310  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.378316  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:42.378321  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:42.378381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:42.403824  947325 cri.go:89] found id: ""
	I1213 10:39:42.403839  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.403845  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:42.403850  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:42.403919  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:42.429874  947325 cri.go:89] found id: ""
	I1213 10:39:42.429890  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.429898  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:42.429905  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:42.429978  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:42.457189  947325 cri.go:89] found id: ""
	I1213 10:39:42.457203  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.457211  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:42.457216  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:42.457277  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:42.485373  947325 cri.go:89] found id: ""
	I1213 10:39:42.485389  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.485400  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:42.485429  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:42.485500  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:42.518706  947325 cri.go:89] found id: ""
	I1213 10:39:42.518720  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.518728  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:42.518735  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:42.518746  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:42.534645  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:42.534662  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:42.606481  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:42.598265   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.599171   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.600937   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.601258   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.602752   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:42.598265   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.599171   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.600937   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.601258   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.602752   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:42.606491  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:42.606501  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:42.673511  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:42.673532  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.702426  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:42.702443  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.267475  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:45.280530  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:45.280751  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:45.313332  947325 cri.go:89] found id: ""
	I1213 10:39:45.313346  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.313354  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:45.313359  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:45.313427  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:45.342213  947325 cri.go:89] found id: ""
	I1213 10:39:45.342227  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.342234  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:45.342239  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:45.342297  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:45.371108  947325 cri.go:89] found id: ""
	I1213 10:39:45.371123  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.371130  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:45.371137  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:45.371197  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:45.400706  947325 cri.go:89] found id: ""
	I1213 10:39:45.400720  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.400728  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:45.400735  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:45.400805  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:45.428233  947325 cri.go:89] found id: ""
	I1213 10:39:45.428258  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.428266  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:45.428271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:45.428341  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:45.458995  947325 cri.go:89] found id: ""
	I1213 10:39:45.459010  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.459017  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:45.459023  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:45.459081  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:45.494206  947325 cri.go:89] found id: ""
	I1213 10:39:45.494220  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.494227  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:45.494235  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:45.494246  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:45.575280  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:45.575299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:45.605803  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:45.605820  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.676085  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:45.676104  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:45.691072  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:45.691091  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:45.756808  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:45.747515   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.748188   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.750879   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.751438   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.752940   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:45.747515   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.748188   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.750879   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.751438   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.752940   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
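
The timestamps show the wait loop re-checking roughly every three seconds: first a `pgrep` for a running kube-apiserver process, then a `crictl` listing for each expected control-plane component. A hedged one-liner to watch the same condition by hand (the pattern is quoted here, whereas the logged invocation leaves it to the remote shell):

    minikube -p <profile> ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'apiserver process not running'"

With `-f`, pgrep matches against the full command line; `-x` requires the whole line to match the pattern, and `-n` selects only the newest matching process.
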
	I1213 10:39:48.257078  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:48.266893  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:48.266954  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:48.292251  947325 cri.go:89] found id: ""
	I1213 10:39:48.292265  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.292272  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:48.292288  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:48.292345  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:48.318109  947325 cri.go:89] found id: ""
	I1213 10:39:48.318134  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.318142  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:48.318147  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:48.318207  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:48.344874  947325 cri.go:89] found id: ""
	I1213 10:39:48.344888  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.344896  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:48.344901  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:48.344966  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:48.372878  947325 cri.go:89] found id: ""
	I1213 10:39:48.372893  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.372900  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:48.372906  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:48.372967  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:48.399505  947325 cri.go:89] found id: ""
	I1213 10:39:48.399517  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.399525  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:48.399530  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:48.399591  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:48.426096  947325 cri.go:89] found id: ""
	I1213 10:39:48.426110  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.426117  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:48.426123  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:48.426182  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:48.452372  947325 cri.go:89] found id: ""
	I1213 10:39:48.452387  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.452394  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:48.452402  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:48.452413  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:48.535530  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:48.535558  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:48.565498  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:48.565516  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:48.638609  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:48.638630  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:48.653725  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:48.653743  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:48.724088  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:48.715285   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.715997   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.717752   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.718374   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.719911   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:48.715285   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.715997   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.717752   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.718374   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.719911   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:51.224632  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:51.234995  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:51.235060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:51.260920  947325 cri.go:89] found id: ""
	I1213 10:39:51.260934  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.260941  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:51.260946  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:51.261010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:51.288308  947325 cri.go:89] found id: ""
	I1213 10:39:51.288323  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.288330  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:51.288335  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:51.288395  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:51.313237  947325 cri.go:89] found id: ""
	I1213 10:39:51.313251  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.313258  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:51.313263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:51.313322  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:51.340832  947325 cri.go:89] found id: ""
	I1213 10:39:51.340845  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.340852  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:51.340857  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:51.340913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:51.367975  947325 cri.go:89] found id: ""
	I1213 10:39:51.367989  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.367996  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:51.368000  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:51.368059  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:51.393715  947325 cri.go:89] found id: ""
	I1213 10:39:51.393728  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.393736  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:51.393741  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:51.393803  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:51.422317  947325 cri.go:89] found id: ""
	I1213 10:39:51.422331  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.422338  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:51.422345  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:51.422356  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:51.492559  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:51.492577  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:51.531769  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:51.531786  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:51.599294  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:51.599316  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:51.615318  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:51.615334  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:51.678990  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:51.669927   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.670629   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672315   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672978   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.674480   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:51.669927   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.670629   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672315   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672978   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.674480   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
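
Since every listing returns an empty container set, the next place to look is why CRI-O never created the control-plane pods in the first place. A sketch under the same assumptions as above (the grep pattern is illustrative, not from the log):

    minikube -p <profile> ssh -- "sudo journalctl -u crio -n 400 | grep -iE 'kube-apiserver|error'"
    minikube -p <profile> ssh -- sudo crictl pods

`crictl pods` lists pod sandboxes; if even the sandboxes are missing, kubelet never asked CRI-O to start the static pods, which points back at the kubelet journal gathered earlier in the sweep.
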
	I1213 10:39:54.180647  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:54.190751  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:54.190817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:54.216105  947325 cri.go:89] found id: ""
	I1213 10:39:54.216119  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.216126  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:54.216131  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:54.216188  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:54.245934  947325 cri.go:89] found id: ""
	I1213 10:39:54.245948  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.245955  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:54.245960  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:54.246019  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:54.272786  947325 cri.go:89] found id: ""
	I1213 10:39:54.272800  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.272807  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:54.272812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:54.272871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:54.298724  947325 cri.go:89] found id: ""
	I1213 10:39:54.298738  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.298745  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:54.298750  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:54.298814  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:54.324500  947325 cri.go:89] found id: ""
	I1213 10:39:54.324514  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.324522  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:54.324533  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:54.324647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:54.351350  947325 cri.go:89] found id: ""
	I1213 10:39:54.351364  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.351372  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:54.351377  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:54.351439  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:54.376698  947325 cri.go:89] found id: ""
	I1213 10:39:54.376712  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.376720  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:54.376729  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:54.376740  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:54.408737  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:54.408753  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:54.475785  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:54.475805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:54.498578  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:54.498595  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:54.571508  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:54.562841   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.563554   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565208   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565888   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.567536   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:54.571518  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:54.571529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
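This probe sequence (pgrep for a kube-apiserver process, crictl for each expected control-plane container, then a round of log gathering) repeats every ~3 seconds below until the restart window expires; the duration metric further down puts the whole loop at just over four minutes. The probe can be reproduced by hand with the same commands the log records:

    # does a kube-apiserver process exist for this cluster?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # empty output means no kube-apiserver container exists, running or exited
    sudo crictl ps -a --quiet --name=kube-apiserver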
	I1213 10:39:57.141570  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:57.151660  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:57.151725  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:57.177208  947325 cri.go:89] found id: ""
	I1213 10:39:57.177222  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.177230  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:57.177235  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:57.177305  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:57.202689  947325 cri.go:89] found id: ""
	I1213 10:39:57.202703  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.202710  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:57.202715  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:57.202778  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:57.227567  947325 cri.go:89] found id: ""
	I1213 10:39:57.227581  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.227588  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:57.227593  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:57.227651  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:57.257034  947325 cri.go:89] found id: ""
	I1213 10:39:57.257048  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.257056  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:57.257061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:57.257118  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:57.282238  947325 cri.go:89] found id: ""
	I1213 10:39:57.282251  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.282258  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:57.282263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:57.282321  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:57.308327  947325 cri.go:89] found id: ""
	I1213 10:39:57.308341  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.308348  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:57.308353  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:57.308412  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:57.334174  947325 cri.go:89] found id: ""
	I1213 10:39:57.334188  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.334196  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:57.334203  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:57.334214  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:57.365982  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:57.365997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:57.438986  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:57.439007  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:57.454096  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:57.454113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:57.539317  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:57.526904   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.527801   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.529755   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.530529   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.532282   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:57.539330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:57.539341  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:00.111211  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:00.161991  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:00.162066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:00.278257  947325 cri.go:89] found id: ""
	I1213 10:40:00.278273  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.278282  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:00.278288  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:00.278371  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:00.356421  947325 cri.go:89] found id: ""
	I1213 10:40:00.356441  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.356449  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:00.356459  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:00.356542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:00.426855  947325 cri.go:89] found id: ""
	I1213 10:40:00.426872  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.426880  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:00.426887  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:00.426962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:00.484844  947325 cri.go:89] found id: ""
	I1213 10:40:00.484860  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.484868  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:00.484874  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:00.484945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:00.602425  947325 cri.go:89] found id: ""
	I1213 10:40:00.602444  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.602452  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:00.602465  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:00.602545  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:00.682272  947325 cri.go:89] found id: ""
	I1213 10:40:00.682288  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.682297  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:00.682303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:00.682377  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:00.717455  947325 cri.go:89] found id: ""
	I1213 10:40:00.717470  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.717478  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:00.717486  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:00.717498  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:00.751785  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:00.751805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:00.823234  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:00.823256  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:00.840067  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:00.840092  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:00.911938  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:00.902907   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.903639   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905343   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905895   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.907562   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:00.911995  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:00.912005  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.480277  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:03.490777  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:03.490839  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:03.516535  947325 cri.go:89] found id: ""
	I1213 10:40:03.516549  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.516556  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:03.516561  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:03.516630  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:03.543061  947325 cri.go:89] found id: ""
	I1213 10:40:03.543075  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.543083  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:03.543088  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:03.543149  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:03.569136  947325 cri.go:89] found id: ""
	I1213 10:40:03.569150  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.569158  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:03.569163  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:03.569222  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:03.596417  947325 cri.go:89] found id: ""
	I1213 10:40:03.596431  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.596438  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:03.596443  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:03.596510  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:03.624475  947325 cri.go:89] found id: ""
	I1213 10:40:03.624489  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.624496  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:03.624501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:03.624560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:03.650480  947325 cri.go:89] found id: ""
	I1213 10:40:03.650495  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.650509  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:03.650515  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:03.650574  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:03.679244  947325 cri.go:89] found id: ""
	I1213 10:40:03.679258  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.679265  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:03.679272  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:03.679283  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:03.752004  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:03.742428   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.743353   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.744776   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.745390   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.747857   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:03.752014  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:03.752025  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.833866  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:03.833888  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:03.863364  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:03.863381  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:03.930202  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:03.930230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.446850  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:06.456936  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:06.457005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:06.481624  947325 cri.go:89] found id: ""
	I1213 10:40:06.481638  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.481645  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:06.481653  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:06.481709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:06.510312  947325 cri.go:89] found id: ""
	I1213 10:40:06.510335  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.510342  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:06.510347  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:06.510408  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:06.541422  947325 cri.go:89] found id: ""
	I1213 10:40:06.541439  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.541446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:06.541451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:06.541511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:06.567745  947325 cri.go:89] found id: ""
	I1213 10:40:06.567759  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.567766  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:06.567771  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:06.567827  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:06.593070  947325 cri.go:89] found id: ""
	I1213 10:40:06.593085  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.593092  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:06.593097  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:06.593159  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:06.620092  947325 cri.go:89] found id: ""
	I1213 10:40:06.620106  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.620114  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:06.620119  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:06.620180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:06.646655  947325 cri.go:89] found id: ""
	I1213 10:40:06.646668  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.646676  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:06.646684  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:06.646695  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:06.713111  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:06.713133  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.729687  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:06.729703  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:06.811226  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:06.802029   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.803349   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.804038   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.805655   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.806271   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:06.811237  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:06.811252  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:06.879267  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:06.879290  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.408425  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:09.418903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:09.418973  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:09.445864  947325 cri.go:89] found id: ""
	I1213 10:40:09.445878  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.445886  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:09.445891  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:09.445953  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:09.477028  947325 cri.go:89] found id: ""
	I1213 10:40:09.477042  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.477049  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:09.477054  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:09.477114  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:09.503739  947325 cri.go:89] found id: ""
	I1213 10:40:09.503754  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.503761  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:09.503766  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:09.503830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:09.530433  947325 cri.go:89] found id: ""
	I1213 10:40:09.530449  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.530458  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:09.530463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:09.530527  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:09.557391  947325 cri.go:89] found id: ""
	I1213 10:40:09.557406  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.557413  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:09.557424  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:09.557488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:09.583991  947325 cri.go:89] found id: ""
	I1213 10:40:09.584006  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.584014  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:09.584020  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:09.584084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:09.610671  947325 cri.go:89] found id: ""
	I1213 10:40:09.610685  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.610692  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:09.610701  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.610712  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:09.626022  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.626039  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.693054  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.684419   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.685112   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.686796   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.687319   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.689067   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:40:09.693064  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:09.693077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:09.767666  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.767694  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.799935  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:09.799953  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.366822  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:12.377676  947325 kubeadm.go:602] duration metric: took 4m2.920144703s to restartPrimaryControlPlane
	W1213 10:40:12.377740  947325 out.go:285] ! Unable to restart control-plane node(s), will reset cluster
	I1213 10:40:12.377825  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:40:12.791103  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:40:12.803671  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:40:12.811334  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:40:12.811389  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:40:12.818912  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:40:12.818922  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:40:12.818976  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:40:12.826986  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:40:12.827043  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:40:12.834424  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:40:12.842053  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:40:12.842110  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:40:12.849745  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.857650  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:40:12.857707  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.865223  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:40:12.873255  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:40:12.873315  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
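All four kubeconfig files are absent, so each grep exits with status 2 and the matching rm is a no-op. The sequence amounts to the following loop (a sketch of the effect, not the tool's actual code):

    for f in admin kubelet controller-manager scheduler; do
      # remove any stale kubeconfig that does not point at this control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done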
	I1213 10:40:12.881016  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:40:12.922045  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:40:12.922134  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:40:13.007876  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:40:13.007942  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:40:13.007977  947325 kubeadm.go:319] OS: Linux
	I1213 10:40:13.008021  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:40:13.008068  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:40:13.008115  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:40:13.008162  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:40:13.008210  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:40:13.008257  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:40:13.008305  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:40:13.008352  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:40:13.008397  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:40:13.081346  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:40:13.081472  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:40:13.081605  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:40:13.089963  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:40:13.093587  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:40:13.093699  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:40:13.093775  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:40:13.093883  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:40:13.093964  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:40:13.094047  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:40:13.094113  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:40:13.094188  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:40:13.094255  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:40:13.094334  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:40:13.094412  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:40:13.094451  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:40:13.094511  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:40:13.317953  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:40:13.628016  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:40:13.956341  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:40:14.391056  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:40:14.663244  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:40:14.663900  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:40:14.666642  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:40:14.670022  947325 out.go:252]   - Booting up control plane ...
	I1213 10:40:14.670125  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:40:14.670202  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:40:14.670267  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:40:14.685196  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:40:14.685574  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:40:14.692785  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:40:14.693070  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:40:14.693112  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:40:14.837275  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:40:14.837410  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:44:14.836045  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00023703s
	I1213 10:44:14.836071  947325 kubeadm.go:319] 
	I1213 10:44:14.836328  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:44:14.836386  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:44:14.836565  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:44:14.836573  947325 kubeadm.go:319] 
	I1213 10:44:14.836751  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:44:14.837048  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:44:14.837101  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:44:14.837105  947325 kubeadm.go:319] 
	I1213 10:44:14.841975  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:44:14.842445  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:44:14.842565  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:44:14.842818  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:44:14.842823  947325 kubeadm.go:319] 
	I1213 10:44:14.842900  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
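The failure message names its own next steps: check the kubelet unit and its journal, and note the cgroup v1 warnings above. A triage sketch to run on the node (the healthz URL and config path are taken from this log; the failCgroupV1 field name follows the warning text and is an assumption about the deployed kubelet version):

    sudo systemctl status kubelet --no-pager        # is the unit running at all?
    sudo journalctl -xeu kubelet | tail -n 50       # why it exited, if it did
    curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo   # the probe kubeadm uses
    stat -fc %T /sys/fs/cgroup                      # cgroup2fs = v2, tmpfs = cgroup v1
    grep -i failcgroupv1 /var/lib/kubelet/config.yaml || echo "failCgroupV1 not set"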
	W1213 10:44:14.842999  947325 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	I1213 10:44:14.843084  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:44:15.255135  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:44:15.268065  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:44:15.268119  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:44:15.276039  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:44:15.276049  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:44:15.276099  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:44:15.283960  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:44:15.284017  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:44:15.291479  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:44:15.299068  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:44:15.299125  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:44:15.306780  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.314429  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:44:15.314486  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.321813  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:44:15.329258  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:44:15.329313  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:44:15.337109  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:44:15.375292  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:44:15.375341  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:44:15.450506  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:44:15.450577  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:44:15.450617  947325 kubeadm.go:319] OS: Linux
	I1213 10:44:15.450661  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:44:15.450708  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:44:15.450754  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:44:15.450800  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:44:15.450849  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:44:15.450900  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:44:15.450944  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:44:15.450990  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:44:15.451035  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:44:15.530795  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:44:15.530912  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:44:15.531008  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:44:15.540322  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:44:15.543642  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:44:15.543721  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:44:15.543784  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:44:15.543859  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:44:15.543918  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:44:15.543987  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:44:15.544039  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:44:15.544101  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:44:15.544161  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:44:15.544244  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:44:15.544319  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:44:15.544391  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:44:15.544447  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:44:15.880761  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:44:16.054505  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:44:16.157902  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:44:16.328847  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:44:16.490203  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:44:16.491055  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:44:16.493708  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:44:16.496861  947325 out.go:252]   - Booting up control plane ...
	I1213 10:44:16.496957  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:44:16.497033  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:44:16.497100  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:44:16.511097  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:44:16.511202  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:44:16.518811  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:44:16.519350  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:44:16.519584  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:44:16.652368  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:44:16.652480  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:48:16.653403  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001096364s
	I1213 10:48:16.653421  947325 kubeadm.go:319] 
	I1213 10:48:16.653477  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:48:16.653510  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:48:16.653633  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:48:16.653637  947325 kubeadm.go:319] 
	I1213 10:48:16.653740  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:48:16.653771  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:48:16.653801  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:48:16.653804  947325 kubeadm.go:319] 
	I1213 10:48:16.659039  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:48:16.659521  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:48:16.659636  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:48:16.659899  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:48:16.659915  947325 kubeadm.go:319] 
	I1213 10:48:16.659983  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:48:16.660039  947325 kubeadm.go:403] duration metric: took 12m7.242563635s to StartCluster
	I1213 10:48:16.660068  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:48:16.660127  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:48:16.684783  947325 cri.go:89] found id: ""
	I1213 10:48:16.684798  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.684805  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:48:16.684810  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:48:16.684871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:48:16.709976  947325 cri.go:89] found id: ""
	I1213 10:48:16.709990  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.709997  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:48:16.710001  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:48:16.710060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:48:16.735338  947325 cri.go:89] found id: ""
	I1213 10:48:16.735351  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.735358  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:48:16.735363  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:48:16.735422  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:48:16.760771  947325 cri.go:89] found id: ""
	I1213 10:48:16.760784  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.760791  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:48:16.760797  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:48:16.760851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:48:16.785193  947325 cri.go:89] found id: ""
	I1213 10:48:16.785207  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.785215  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:48:16.785220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:48:16.785280  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:48:16.811008  947325 cri.go:89] found id: ""
	I1213 10:48:16.811022  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.811029  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:48:16.811034  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:48:16.811093  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:48:16.840077  947325 cri.go:89] found id: ""
	I1213 10:48:16.840092  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.840099  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:48:16.840119  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:48:16.840130  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:48:16.909363  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:48:16.909386  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:48:16.924416  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:48:16.924438  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:48:17.001976  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:48:17.001987  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:48:17.001997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:48:17.083059  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:48:17.083078  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:48:17.113855  947325 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:48:17.113886  947325 out.go:285] * 
	W1213 10:48:17.113944  947325 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.113961  947325 out.go:285] * 
	W1213 10:48:17.116079  947325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:48:17.121140  947325 out.go:203] 
	W1213 10:48:17.123914  947325 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.123972  947325 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:48:17.123993  947325 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:48:17.128861  947325 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290540792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290575401Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290616288Z" level=info msg="Create NRI interface"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291085281Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291114401Z" level=info msg="runtime interface created"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291129622Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291142299Z" level=info msg="runtime interface starting up..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291148937Z" level=info msg="starting plugins..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291165938Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291236782Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:36:08 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.084834397Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b5efff79-46eb-41f2-bde4-db3ba9dab38c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08566844Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5615dd29-1801-45cf-b9ec-bc2670925ce8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086277701Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=27142b95-3cc3-4adb-a2df-9868044a9998 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086727642Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=22595c5f-3db5-4062-8e04-cb17f6bc794b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.087217057Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4772ea3e-d27c-4029-bb8e-c23e148a40e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08768738Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9219a2d6-ec51-448e-87c0-444e5d98b53a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.088157391Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a67c2fca-67f5-45c5-89da-71309b05610c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.534115746Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3049b4b3-14f8-431e-ab4d-c6efa4a37dac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.535316398Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=aca0cbac-b5e4-4959-a768-b532f9c78063 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.53607634Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f458fd4-6b17-4cc2-8b0b-32f7a700d5d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.536719579Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=93897364-2c92-4299-ac1e-dfb20638840a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538084483Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=80c3abea-faad-48a9-8be1-ff63680847aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538942002Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9f779e3b-3069-476b-9013-f486002774b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.539437793Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=de504892-ee6f-46b3-8ac6-2712427d6188 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:48:20.512087   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:20.512578   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:20.514270   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:20.514839   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:20.516457   21333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:48:20 up  5:30,  0 user,  load average: 0.33, 0.22, 0.52
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:48:17 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:48:18 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 13 10:48:18 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:18 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:18 functional-200955 kubelet[21209]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:18 functional-200955 kubelet[21209]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:18 functional-200955 kubelet[21209]: E1213 10:48:18.546395   21209 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:48:18 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:48:18 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:48:19 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 13 10:48:19 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:19 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:19 functional-200955 kubelet[21227]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:19 functional-200955 kubelet[21227]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:19 functional-200955 kubelet[21227]: E1213 10:48:19.255998   21227 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:48:19 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:48:19 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:48:19 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 13 10:48:19 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:19 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:48:20 functional-200955 kubelet[21250]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:20 functional-200955 kubelet[21250]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:48:20 functional-200955 kubelet[21250]: E1213 10:48:20.039167   21250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:48:20 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:48:20 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
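The kubelet journal at the end of this dump shows the real failure: kubelet v1.35.0-beta.0 exits during configuration validation because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so every kubeadm wait-control-plane phase above times out against http://127.0.0.1:10248/healthz. The [WARNING SystemVerification] lines name the opt-out: the KubeletConfiguration field FailCgroupV1. A minimal sketch of that override, assuming the Go field name maps to the usual lowerCamel YAML key; how such a file would be wired into minikube's kubeadm invocation is not shown in this log:

	# kubelet-config.yaml -- sketch of the opt-out named in the warning above;
	# per the linked KEP, the cgroup v1 validation must also be skipped explicitly.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

The warning's preferred fix is migrating the host to cgroup v2 rather than opting out; the generic suggestion in the trace (--extra-config=kubelet.cgroup-driver=systemd) targets an older failure mode and likely would not clear this particular validation error.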
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (353.53877ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-200955 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-200955 apply -f testdata/invalidsvc.yaml: exit status 1 (57.199543ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-200955 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
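Worth noting: this failure never exercised the invalid manifest itself. kubectl aborted while downloading the OpenAPI schema because nothing was listening on 192.168.49.2:8441, a knock-on effect of the kubelet failure above. A quick sketch for separating the two cases before re-running (profile and context names are taken from the log):

	# --validate=false, mentioned in the error text, only skips schema
	# validation; it cannot help with a refused connection.
	out/minikube-linux-arm64 status -p functional-200955
	kubectl --context functional-200955 cluster-info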

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-200955 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-200955 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-200955 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-200955 --alsologtostderr -v=1] stderr:
I1213 10:50:23.290852  964615 out.go:360] Setting OutFile to fd 1 ...
I1213 10:50:23.291015  964615 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:23.291037  964615 out.go:374] Setting ErrFile to fd 2...
I1213 10:50:23.291054  964615 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:23.291332  964615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:50:23.291604  964615 mustload.go:66] Loading cluster: functional-200955
I1213 10:50:23.292104  964615 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:23.292681  964615 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:50:23.311691  964615 host.go:66] Checking if "functional-200955" exists ...
I1213 10:50:23.312060  964615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:50:23.377095  964615 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:23.366786183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:50:23.377213  964615 api_server.go:166] Checking apiserver status ...
I1213 10:50:23.377275  964615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 10:50:23.377313  964615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:50:23.395503  964615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
W1213 10:50:23.507565  964615 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 10:50:23.511700  964615 out.go:179] * The control-plane node functional-200955 apiserver is not running: (state=Stopped)
I1213 10:50:23.515295  964615 out.go:179]   To start a cluster, run: "minikube start -p functional-200955"
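The trace above shows how the dashboard command decided the control plane was down: it opened an SSH session into the node and probed for an apiserver process with pgrep, which exited non-zero. The same probe can be reproduced from the host, as a sketch (container name from the log; using docker exec in place of minikube's internal SSH runner is an assumption of equivalence, not something this log demonstrates):

	# pgrep: -x exact match, -n newest process, -f match the full command line;
	# a non-zero exit means no kube-apiserver process exists inside the node.
	docker exec functional-200955 sudo pgrep -xnf 'kube-apiserver.*minikube.*'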
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (320.496779ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-200955 service hello-node --url --format={{.IP}}                                                                                         │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ service   │ functional-200955 service hello-node --url                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001:/mount-9p --alsologtostderr -v=1              │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh -- ls -la /mount-9p                                                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh cat /mount-9p/test-1765623013843816415                                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh sudo umount -f /mount-9p                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2010494100/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh -- ls -la /mount-9p                                                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh sudo umount -f /mount-9p                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount1 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount2 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount3 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount1                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh findmnt -T /mount2                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh findmnt -T /mount3                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ mount     │ -p functional-200955 --kill=true                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-200955 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-200955 --alsologtostderr -v=1                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:50:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:50:23.051722  964543 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:50:23.051865  964543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:23.051877  964543 out.go:374] Setting ErrFile to fd 2...
	I1213 10:50:23.051883  964543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:23.052137  964543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:50:23.054351  964543 out.go:368] Setting JSON to false
	I1213 10:50:23.055400  964543 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19972,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:50:23.055509  964543 start.go:143] virtualization:  
	I1213 10:50:23.058700  964543 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:50:23.060861  964543 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:50:23.061000  964543 notify.go:221] Checking for updates...
	I1213 10:50:23.066718  964543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:50:23.069526  964543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:50:23.072436  964543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:50:23.075311  964543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:50:23.078155  964543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:50:23.081486  964543 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:50:23.082214  964543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:50:23.107165  964543 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:50:23.107352  964543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:23.167099  964543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:23.156424817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:23.167226  964543 docker.go:319] overlay module found
	I1213 10:50:23.170224  964543 out.go:179] * Using the docker driver based on existing profile
	I1213 10:50:23.173134  964543 start.go:309] selected driver: docker
	I1213 10:50:23.173150  964543 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:23.173256  964543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:50:23.173374  964543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:23.229761  964543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:23.219915868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:23.230225  964543 cni.go:84] Creating CNI manager for ""
	I1213 10:50:23.230291  964543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:50:23.230341  964543 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:23.233884  964543 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290540792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290575401Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290616288Z" level=info msg="Create NRI interface"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291085281Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291114401Z" level=info msg="runtime interface created"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291129622Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291142299Z" level=info msg="runtime interface starting up..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291148937Z" level=info msg="starting plugins..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291165938Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291236782Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:36:08 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.084834397Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b5efff79-46eb-41f2-bde4-db3ba9dab38c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08566844Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5615dd29-1801-45cf-b9ec-bc2670925ce8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086277701Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=27142b95-3cc3-4adb-a2df-9868044a9998 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086727642Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=22595c5f-3db5-4062-8e04-cb17f6bc794b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.087217057Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4772ea3e-d27c-4029-bb8e-c23e148a40e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08768738Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9219a2d6-ec51-448e-87c0-444e5d98b53a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.088157391Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a67c2fca-67f5-45c5-89da-71309b05610c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.534115746Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3049b4b3-14f8-431e-ab4d-c6efa4a37dac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.535316398Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=aca0cbac-b5e4-4959-a768-b532f9c78063 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.53607634Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f458fd4-6b17-4cc2-8b0b-32f7a700d5d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.536719579Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=93897364-2c92-4299-ac1e-dfb20638840a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538084483Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=80c3abea-faad-48a9-8be1-ff63680847aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538942002Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9f779e3b-3069-476b-9013-f486002774b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.539437793Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=de504892-ee6f-46b3-8ac6-2712427d6188 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:24.588317   23397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:24.588919   23397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:24.590529   23397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:24.590909   23397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:24.592574   23397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:50:24 up  5:32,  0 user,  load average: 0.53, 0.31, 0.52
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:50:22 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:22 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1128.
	Dec 13 10:50:22 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:22 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:23 functional-200955 kubelet[23279]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:23 functional-200955 kubelet[23279]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:23 functional-200955 kubelet[23279]: E1213 10:50:23.034897   23279 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:23 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:23 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:23 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 13 10:50:23 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:23 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:23 functional-200955 kubelet[23293]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:23 functional-200955 kubelet[23293]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:23 functional-200955 kubelet[23293]: E1213 10:50:23.798471   23293 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:23 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:23 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:24 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 13 10:50:24 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:24 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:24 functional-200955 kubelet[23381]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:24 functional-200955 kubelet[23381]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:24 functional-200955 kubelet[23381]: E1213 10:50:24.539225   23381 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:24 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:24 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
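The kubelet section above shows the actual failure mode: the service is in a systemd restart loop (counter at 1130) because the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host. A quick diagnostic sketch, not part of the test suite, to confirm which cgroup hierarchy the node exposes (stat is assumed present in the kicbase image):

	# cgroup2fs means cgroup v2; tmpfs means the legacy cgroup v1 hierarchy.
	out/minikube-linux-arm64 -p functional-200955 ssh -- stat -fc %T /sys/fs/cgroup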
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (317.56623ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.76s)
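Per the docker inspect output above, the apiserver port 8441/tcp is published on 127.0.0.1:33526, so the port mapping can be probed independently of minikube. A hedged one-off check (curl assumed available; /livez is the standard apiserver health endpoint):

	# "Connection refused" here confirms nothing is listening behind the mapping.
	curl -k --max-time 5 https://127.0.0.1:33526/livez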
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 status: exit status 2 (310.262391ms)
-- stdout --
	functional-200955
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-200955 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (325.029945ms)
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-200955 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 status -o json: exit status 2 (308.266835ms)
-- stdout --
	{"Name":"functional-200955","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-200955 status -o json" : exit status 2
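The three invocations above exercise the same status data through the default text form, a Go template (-f), and JSON (-o json). A small sketch of consuming the JSON form programmatically, assuming jq is installed; note minikube's non-zero exit status still propagates even though the pipe succeeds:

	# Prints e.g. "host=Running kubelet=Stopped apiserver=Stopped" from the JSON above.
	out/minikube-linux-arm64 -p functional-200955 status -o json \
	  | jq -r '"host=\(.Host) kubelet=\(.Kubelet) apiserver=\(.APIServer)"'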
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:
-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
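
The inspect output above is where minikube recovers its SSH endpoint: the container publishes 22/tcp on 127.0.0.1:33523, and the log lines further down query that mapping with a Go template. A minimal sketch of the same lookup (the docker command and --format template are taken verbatim from this log; the helper name and packaging are ours):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort asks Docker for the host port published for the
	// container's 22/tcp, using the same --format template that the
	// minikube log shows ("docker container inspect -f ...").
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("functional-200955")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp is published on 127.0.0.1:" + port) // 33523 in the dump above
	}
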
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (325.643777ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-200955 addons list -o json                                                                                                               │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ service │ functional-200955 service list                                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ service │ functional-200955 service list -o json                                                                                                              │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ service │ functional-200955 service --namespace=default --https --url hello-node                                                                              │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ service │ functional-200955 service hello-node --url --format={{.IP}}                                                                                         │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ service │ functional-200955 service hello-node --url                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount   │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001:/mount-9p --alsologtostderr -v=1              │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh     │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh     │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh     │ functional-200955 ssh -- ls -la /mount-9p                                                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh     │ functional-200955 ssh cat /mount-9p/test-1765623013843816415                                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh     │ functional-200955 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh     │ functional-200955 ssh sudo umount -f /mount-9p                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ mount   │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2010494100/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh     │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh     │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh     │ functional-200955 ssh -- ls -la /mount-9p                                                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh     │ functional-200955 ssh sudo umount -f /mount-9p                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount   │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount1 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount   │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount2 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount   │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount3 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh     │ functional-200955 ssh findmnt -T /mount1                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh     │ functional-200955 ssh findmnt -T /mount2                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh     │ functional-200955 ssh findmnt -T /mount3                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ mount   │ -p functional-200955 --kill=true                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:36:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:36:05.024663  947325 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:36:05.024857  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.024862  947325 out.go:374] Setting ErrFile to fd 2...
	I1213 10:36:05.024867  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.025148  947325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:36:05.025578  947325 out.go:368] Setting JSON to false
	I1213 10:36:05.026512  947325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19114,"bootTime":1765603051,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:36:05.026573  947325 start.go:143] virtualization:  
	I1213 10:36:05.030119  947325 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:36:05.033180  947325 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:36:05.033273  947325 notify.go:221] Checking for updates...
	I1213 10:36:05.036966  947325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:36:05.041647  947325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:36:05.044535  947325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:36:05.047483  947325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:36:05.050413  947325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:36:05.053885  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:05.053982  947325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:36:05.081037  947325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:36:05.081166  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.151201  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.14075062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.151307  947325 docker.go:319] overlay module found
	I1213 10:36:05.154359  947325 out.go:179] * Using the docker driver based on existing profile
	I1213 10:36:05.157187  947325 start.go:309] selected driver: docker
	I1213 10:36:05.157194  947325 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.157283  947325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:36:05.157388  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.214971  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.204866403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.215380  947325 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:36:05.215410  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:05.215457  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:05.215500  947325 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.218694  947325 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:36:05.221699  947325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:36:05.224563  947325 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:36:05.227409  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:05.227448  947325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:36:05.227455  947325 cache.go:65] Caching tarball of preloaded images
	I1213 10:36:05.227491  947325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:36:05.227538  947325 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:36:05.227551  947325 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:36:05.227666  947325 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:36:05.247494  947325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:36:05.247505  947325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:36:05.247518  947325 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:36:05.247549  947325 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:36:05.247604  947325 start.go:364] duration metric: took 37.317µs to acquireMachinesLock for "functional-200955"
	I1213 10:36:05.247623  947325 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:36:05.247627  947325 fix.go:54] fixHost starting: 
	I1213 10:36:05.247894  947325 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:36:05.265053  947325 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:36:05.265102  947325 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:36:05.268458  947325 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:36:05.268485  947325 machine.go:94] provisionDockerMachine start ...
	I1213 10:36:05.268569  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.285699  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.286021  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.286027  947325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:36:05.433614  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.433628  947325 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:36:05.433698  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.452166  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.452470  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.452478  947325 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:36:05.611951  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.612044  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.630892  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.631191  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.631205  947325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:36:05.782771  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
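
The shell fragment minikube just ran keeps /etc/hosts in step with the provisioned hostname: if no entry already ends in the hostname, it either rewrites an existing 127.0.1.1 line in place or appends a fresh one, so repeated starts stay idempotent. A sketch that renders the same snippet for an arbitrary hostname (the helper name and packaging are ours, not minikube's; the shell text mirrors the log above):

	package main

	import "fmt"

	// hostsUpdateCmd builds a shell command equivalent to the logged
	// fragment: rewrite an existing 127.0.1.1 entry for the hostname,
	// or append one if none is present. Illustrative only.
	func hostsUpdateCmd(hostname string) string {
		return fmt.Sprintf(
			`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
				`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
				`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
				`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`,
			hostname)
	}

	func main() {
		fmt.Println(hostsUpdateCmd("functional-200955"))
	}
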
	I1213 10:36:05.782787  947325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:36:05.782810  947325 ubuntu.go:190] setting up certificates
	I1213 10:36:05.782824  947325 provision.go:84] configureAuth start
	I1213 10:36:05.782884  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:05.800513  947325 provision.go:143] copyHostCerts
	I1213 10:36:05.800580  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:36:05.800588  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:36:05.800662  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:36:05.800773  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:36:05.800777  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:36:05.800802  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:36:05.800861  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:36:05.800865  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:36:05.800887  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:36:05.800938  947325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:36:06.162765  947325 provision.go:177] copyRemoteCerts
	I1213 10:36:06.162821  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:36:06.162864  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.179964  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.285273  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:36:06.303138  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:36:06.321000  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:36:06.339159  947325 provision.go:87] duration metric: took 556.311814ms to configureAuth
	I1213 10:36:06.339177  947325 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:36:06.339382  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:06.339492  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.357323  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:06.357649  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:06.357662  947325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:36:06.705283  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:36:06.705297  947325 machine.go:97] duration metric: took 1.436804594s to provisionDockerMachine
	I1213 10:36:06.705307  947325 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:36:06.705318  947325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:36:06.705379  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:36:06.705435  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.722886  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.829449  947325 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:36:06.832816  947325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:36:06.832847  947325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:36:06.832858  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:36:06.832914  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:36:06.832996  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:36:06.833088  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:36:06.833134  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:36:06.840686  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:06.859025  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:36:06.877717  947325 start.go:296] duration metric: took 172.395592ms for postStartSetup
	I1213 10:36:06.877814  947325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:36:06.877857  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.896880  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.998897  947325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:36:07.005668  947325 fix.go:56] duration metric: took 1.758032508s for fixHost
	I1213 10:36:07.005685  947325 start.go:83] releasing machines lock for "functional-200955", held for 1.758074248s
	I1213 10:36:07.005790  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:07.024345  947325 ssh_runner.go:195] Run: cat /version.json
	I1213 10:36:07.024397  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.024410  947325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:36:07.024473  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.045627  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.056017  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.235962  947325 ssh_runner.go:195] Run: systemctl --version
	I1213 10:36:07.243338  947325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:36:07.293399  947325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:36:07.297828  947325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:36:07.297890  947325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:36:07.305998  947325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:36:07.306012  947325 start.go:496] detecting cgroup driver to use...
	I1213 10:36:07.306043  947325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:36:07.306089  947325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:36:07.321360  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:36:07.334818  947325 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:36:07.334873  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:36:07.350268  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:36:07.363266  947325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:36:07.482802  947325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:36:07.601250  947325 docker.go:234] disabling docker service ...
	I1213 10:36:07.601314  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:36:07.616649  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:36:07.630193  947325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:36:07.750803  947325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:36:07.872755  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:36:07.885775  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:36:07.901044  947325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:36:07.901118  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.910913  947325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:36:07.910999  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.920242  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.929183  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.938207  947325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:36:07.946601  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.956231  947325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.964904  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.974470  947325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:36:07.983694  947325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:36:07.992492  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.121808  947325 ssh_runner.go:195] Run: sudo systemctl restart crio
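
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O: it pins the pause image, switches the cgroup manager to cgroupfs, re-adds conmon_cgroup = "pod", and ensures default_sysctls opens unprivileged ports from 0. Reconstructed from those sed commands alone (not copied from the machine, and section placement assumes CRI-O's stock layout), the resulting drop-in would plausibly read:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
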
	I1213 10:36:08.297420  947325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:36:08.297489  947325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:36:08.301247  947325 start.go:564] Will wait 60s for crictl version
	I1213 10:36:08.301305  947325 ssh_runner.go:195] Run: which crictl
	I1213 10:36:08.304718  947325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:36:08.329152  947325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:36:08.329228  947325 ssh_runner.go:195] Run: crio --version
	I1213 10:36:08.358630  947325 ssh_runner.go:195] Run: crio --version
	I1213 10:36:08.393160  947325 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:36:08.396025  947325 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:36:08.412435  947325 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:36:08.419349  947325 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:36:08.422234  947325 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:36:08.422367  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:08.422431  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.457237  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.457249  947325 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:36:08.457306  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.483246  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.483258  947325 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:36:08.483264  947325 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:36:08.483360  947325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:36:08.483446  947325 ssh_runner.go:195] Run: crio config
	I1213 10:36:08.545147  947325 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:36:08.545173  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:08.545183  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:08.545197  947325 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:36:08.545221  947325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:36:08.545347  947325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:36:08.545423  947325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:36:08.553515  947325 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:36:08.553607  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:36:08.561293  947325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:36:08.574385  947325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:36:08.587429  947325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 10:36:08.600337  947325 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:36:08.603994  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.714374  947325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:36:08.729978  947325 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:36:08.729989  947325 certs.go:195] generating shared ca certs ...
	I1213 10:36:08.730004  947325 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:36:08.730137  947325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:36:08.730179  947325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:36:08.730184  947325 certs.go:257] generating profile certs ...
	I1213 10:36:08.730263  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:36:08.730310  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:36:08.730347  947325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:36:08.730463  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:36:08.730496  947325 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:36:08.730503  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:36:08.730557  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:36:08.730581  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:36:08.730604  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:36:08.730645  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:08.731237  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:36:08.752034  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:36:08.773437  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:36:08.794430  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:36:08.812223  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:36:08.829741  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:36:08.846903  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:36:08.865036  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:36:08.883435  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:36:08.901321  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:36:08.919555  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:36:08.937123  947325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:36:08.950079  947325 ssh_runner.go:195] Run: openssl version
	I1213 10:36:08.956456  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.964062  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:36:08.971445  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975220  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975278  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:36:09.016546  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:36:09.024284  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.031776  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:36:09.039308  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.042991  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.043047  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.084141  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:36:09.091531  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.098770  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:36:09.106212  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.109989  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.110044  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.153254  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
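The block above installs each CA into /usr/share/ca-certificates and then links it under /etc/ssl/certs by its OpenSSL subject hash (the ".0" names such as b5213941.0 and 51391683.0). A minimal Go sketch of that pattern follows; installCACert is a hypothetical helper name, not minikube's actual code (the real logic lives in certs.go, per the log prefixes):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert mirrors the pattern in the log above: hash the PEM with
// openssl, then symlink it into /etc/ssl/certs as "<hash>.0" so
// OpenSSL-based clients pick it up from the system trust store.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// -f replaces a stale link; -s keeps it a symlink into the cert store.
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}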
	I1213 10:36:09.160715  947325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:36:09.164506  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:36:09.205710  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:36:09.247436  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:36:09.288348  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:36:09.331611  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:36:09.374582  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
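The "-checkend 86400" probes above ask openssl whether each control-plane certificate remains valid for at least another 24 hours (86400 seconds); a non-zero exit marks a cert that is expired or about to expire. A small sketch of the same check, with paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h mirrors the probes above: openssl's -checkend flag makes the
// command exit non-zero if the cert expires within N seconds (86400 = 24h).
func validFor24h(cert string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run() == nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Println(c, "valid for 24h:", validFor24h(c))
	}
}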
	I1213 10:36:09.417486  947325 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:09.417589  947325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:36:09.417682  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.449632  947325 cri.go:89] found id: ""
	I1213 10:36:09.449706  947325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:36:09.457511  947325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:36:09.457521  947325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:36:09.457596  947325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:36:09.465280  947325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.465840  947325 kubeconfig.go:125] found "functional-200955" server: "https://192.168.49.2:8441"
	I1213 10:36:09.467296  947325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:36:09.475528  947325 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:21:33.398300096 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:36:08.597035311 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
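Drift detection here is simply a unified diff between the deployed kubeadm.yaml and the freshly rendered one: diff -u exits non-zero when the files differ, and the diff body (the changed enable-admission-plugins value above) becomes the log payload. A minimal sketch under that reading; driftDetected is our name, not minikube's:

package main

import (
	"fmt"
	"os/exec"
)

// driftDetected mirrors the check above: diff -u exits 0 only when the two
// files match, so any error signals drift and the unified diff explains it.
func driftDetected(oldPath, newPath string) (bool, string) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	return err != nil, string(out)
}

func main() {
	if drift, diff := driftDetected("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"); drift {
		fmt.Println("detected kubeadm config drift (will reconfigure):")
		fmt.Println(diff)
	}
}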
	I1213 10:36:09.475546  947325 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:36:09.475557  947325 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:36:09.475616  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.507924  947325 cri.go:89] found id: ""
	I1213 10:36:09.508000  947325 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:36:09.528470  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:36:09.536474  947325 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:25 /etc/kubernetes/scheduler.conf
	
	I1213 10:36:09.536539  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:36:09.544588  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:36:09.552476  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.552532  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:36:09.560285  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.567834  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.567887  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.575592  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:36:09.583902  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.583961  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
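Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; files that do not mention https://control-plane.minikube.internal:8441 are removed so the kubeconfig phase below can regenerate them. A sketch of that grep-then-remove pattern (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// pruneStaleKubeconfig mirrors the pattern above: grep exits non-zero when
// the expected endpoint is absent, and the stale file is removed so
// `kubeadm init phase kubeconfig` can regenerate it.
func pruneStaleKubeconfig(path, endpoint string) error {
	if exec.Command("sudo", "grep", endpoint, path).Run() == nil {
		return nil // endpoint present, keep the file
	}
	return exec.Command("sudo", "rm", "-f", path).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(f, endpoint); err != nil {
			fmt.Println("prune", f, ":", err)
		}
	}
}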
	I1213 10:36:09.591566  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:36:09.599534  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:09.647986  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.096705  947325 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44869318s)
	I1213 10:36:11.096768  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.325396  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.390971  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
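Rather than running a full `kubeadm init`, the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the versioned binaries. A sketch of that sequence as run above, simplified from minikube's actual bootstrapper:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Replays the phase sequence from the log; running phases individually
	// preserves existing cluster state that a full init would wipe.
	const env = `env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH"`
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	} {
		cmd := fmt.Sprintf("sudo %s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", env, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return // later phases depend on earlier ones
		}
	}
}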
	I1213 10:36:11.438539  947325 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:36:11.438613  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:11.939787  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical `Run: sudo pgrep -xnf kube-apiserver.*minikube.*` probes repeated at ~500ms intervals, all with no match, from 10:36:12 through 10:37:10 ...]
	I1213 10:37:10.939776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
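The wait above polls pgrep for the apiserver process roughly every 500ms; only after about a minute with no match does the runner fall back to the diagnostics gathering below. A sketch of such a loop (the timeout value is an assumption, not taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer mirrors the poll loop in the log: run pgrep every 500ms
// until the apiserver process appears or the deadline passes. The timeout
// is illustrative; minikube's real loop lives in its bootstrapper.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}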
	I1213 10:37:11.439694  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:11.439774  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:11.465379  947325 cri.go:89] found id: ""
	I1213 10:37:11.465394  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.465401  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:11.465406  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:11.465463  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:11.498722  947325 cri.go:89] found id: ""
	I1213 10:37:11.498736  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.498744  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:11.498749  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:11.498808  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:11.528435  947325 cri.go:89] found id: ""
	I1213 10:37:11.528450  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.528456  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:11.528461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:11.528520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:11.556412  947325 cri.go:89] found id: ""
	I1213 10:37:11.556428  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.556435  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:11.556439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:11.556495  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:11.582029  947325 cri.go:89] found id: ""
	I1213 10:37:11.582043  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.582050  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:11.582055  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:11.582111  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:11.606900  947325 cri.go:89] found id: ""
	I1213 10:37:11.606914  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.606921  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:11.606926  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:11.606995  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:11.631834  947325 cri.go:89] found id: ""
	I1213 10:37:11.631848  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.631855  947325 logs.go:284] No container was found matching "kindnet"
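Each control-plane component is probed with `crictl ps -a --quiet --name=<component>`, which prints bare container IDs one per line; an empty result produces the "No container was found" warnings above. A sketch of that sweep (listContainers is our name for it):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the crictl sweep above: --quiet prints only IDs,
// and an empty result means the component has no container at all.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		if ids := listContainers(c); len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		} else {
			fmt.Println(c, "->", ids)
		}
	}
}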
	I1213 10:37:11.631863  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:11.631873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:11.696990  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:11.697011  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:11.711905  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:11.711923  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:11.780498  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:11.780514  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:11.780525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:11.849149  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:11.849169  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
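When no containers exist, diagnostics are gathered best-effort from journalctl, dmesg, kubectl, and crictl; a failing source (like the refused describe-nodes call above) is recorded and the sweep continues. A sketch of that pattern, with command strings copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// gather mirrors the "Gathering logs for ..." cycle above: each source is
// collected best-effort, and a failure is reported rather than aborting.
func gather(sources map[string]string) {
	for name, cmd := range sources {
		fmt.Println("Gathering logs for", name, "...")
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("failed %s: %v\n%s", name, err, out)
		}
	}
}

func main() {
	gather(map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
	})
}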
	I1213 10:37:14.380275  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:14.390300  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:14.390376  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:14.422358  947325 cri.go:89] found id: ""
	I1213 10:37:14.422408  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.422434  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:14.422439  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:14.422577  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:14.449364  947325 cri.go:89] found id: ""
	I1213 10:37:14.449379  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.449386  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:14.449391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:14.449448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:14.478529  947325 cri.go:89] found id: ""
	I1213 10:37:14.478543  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.478550  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:14.478555  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:14.478612  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:14.521652  947325 cri.go:89] found id: ""
	I1213 10:37:14.521666  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.521673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:14.521678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:14.521736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:14.558521  947325 cri.go:89] found id: ""
	I1213 10:37:14.558535  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.558542  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:14.558547  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:14.558605  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:14.582435  947325 cri.go:89] found id: ""
	I1213 10:37:14.582448  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.582455  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:14.582461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:14.582518  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:14.607776  947325 cri.go:89] found id: ""
	I1213 10:37:14.607791  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.607799  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:14.607807  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:14.607816  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:14.673008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:14.673028  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:14.688569  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:14.688585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:14.753510  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:14.753524  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:14.753556  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:14.820848  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:14.820868  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:17.353563  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:17.363824  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:17.363887  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:17.393249  947325 cri.go:89] found id: ""
	I1213 10:37:17.393263  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.393271  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:17.393275  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:17.393334  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:17.421143  947325 cri.go:89] found id: ""
	I1213 10:37:17.421157  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.421164  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:17.421169  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:17.421226  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:17.445347  947325 cri.go:89] found id: ""
	I1213 10:37:17.445361  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.445368  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:17.445372  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:17.445428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:17.474380  947325 cri.go:89] found id: ""
	I1213 10:37:17.474406  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.474413  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:17.474419  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:17.474502  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:17.510146  947325 cri.go:89] found id: ""
	I1213 10:37:17.510160  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.510167  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:17.510172  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:17.510228  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:17.544872  947325 cri.go:89] found id: ""
	I1213 10:37:17.544897  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.544911  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:17.544917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:17.544987  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:17.570527  947325 cri.go:89] found id: ""
	I1213 10:37:17.570542  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.570549  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:17.570556  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:17.570567  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:17.634904  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:17.634924  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:17.649198  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:17.649216  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:17.710891  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:17.710910  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:17.710921  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:17.779540  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:17.779561  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.315323  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:20.326110  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:20.326185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:20.351285  947325 cri.go:89] found id: ""
	I1213 10:37:20.351299  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.351307  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:20.351312  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:20.351381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:20.377322  947325 cri.go:89] found id: ""
	I1213 10:37:20.377335  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.377343  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:20.377352  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:20.377413  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:20.403676  947325 cri.go:89] found id: ""
	I1213 10:37:20.403691  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.403698  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:20.403704  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:20.403766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:20.433713  947325 cri.go:89] found id: ""
	I1213 10:37:20.433736  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.433744  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:20.433749  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:20.433809  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:20.459243  947325 cri.go:89] found id: ""
	I1213 10:37:20.459258  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.459265  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:20.459270  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:20.459328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:20.505295  947325 cri.go:89] found id: ""
	I1213 10:37:20.505310  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.505317  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:20.505322  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:20.505382  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:20.534487  947325 cri.go:89] found id: ""
	I1213 10:37:20.534502  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.534510  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:20.534518  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:20.534529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.562816  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:20.562833  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:20.626774  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:20.626798  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:20.642510  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:20.642526  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:20.716150  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:20.716164  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:20.716176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.288286  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:23.298705  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:23.298766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:23.333024  947325 cri.go:89] found id: ""
	I1213 10:37:23.333038  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.333046  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:23.333051  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:23.333115  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:23.358903  947325 cri.go:89] found id: ""
	I1213 10:37:23.358916  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.358924  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:23.358929  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:23.358989  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:23.384787  947325 cri.go:89] found id: ""
	I1213 10:37:23.384801  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.384808  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:23.384812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:23.384871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:23.410002  947325 cri.go:89] found id: ""
	I1213 10:37:23.410036  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.410061  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:23.410086  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:23.410150  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:23.434837  947325 cri.go:89] found id: ""
	I1213 10:37:23.434865  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.434872  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:23.434878  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:23.434945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:23.464375  947325 cri.go:89] found id: ""
	I1213 10:37:23.464389  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.464396  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:23.464402  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:23.464472  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:23.506074  947325 cri.go:89] found id: ""
	I1213 10:37:23.506089  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.506097  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:23.506104  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:23.506116  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.589169  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:23.589191  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:23.619461  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:23.619477  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:23.688698  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:23.688720  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:23.703620  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:23.703637  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:23.771897  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:26.272169  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:26.282101  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:26.282172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:26.308055  947325 cri.go:89] found id: ""
	I1213 10:37:26.308071  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.308078  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:26.308086  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:26.308147  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:26.334700  947325 cri.go:89] found id: ""
	I1213 10:37:26.334722  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.334729  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:26.334735  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:26.334799  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:26.360726  947325 cri.go:89] found id: ""
	I1213 10:37:26.360749  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.360758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:26.360763  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:26.360830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:26.385135  947325 cri.go:89] found id: ""
	I1213 10:37:26.385149  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.385157  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:26.385162  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:26.385233  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:26.412837  947325 cri.go:89] found id: ""
	I1213 10:37:26.412851  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.412858  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:26.412863  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:26.412942  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:26.437812  947325 cri.go:89] found id: ""
	I1213 10:37:26.437827  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.437834  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:26.437839  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:26.437900  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:26.463570  947325 cri.go:89] found id: ""
	I1213 10:37:26.463584  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.463592  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:26.463600  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:26.463611  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:26.534802  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:26.534823  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:26.550643  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:26.550658  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:26.612829  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:26.612839  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:26.612849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:26.681461  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:26.681480  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
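
This diagnostic cycle repeats below at roughly three-second intervals: minikube sweeps the CRI for each expected control-plane container, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. The sweep amounts to the following loop over the exact crictl calls shown in the log — a condensed sketch to run on the node, not minikube source:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # an empty result here is what produces the 'found id: ""' lines above
	  sudo crictl ps -a --quiet --name="$c"
	done
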
	I1213 10:37:29.210709  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:29.221193  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:29.221255  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:29.249274  947325 cri.go:89] found id: ""
	I1213 10:37:29.249289  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.249297  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:29.249301  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:29.249369  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:29.276685  947325 cri.go:89] found id: ""
	I1213 10:37:29.276709  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.276718  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:29.276723  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:29.276788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:29.303267  947325 cri.go:89] found id: ""
	I1213 10:37:29.303281  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.303289  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:29.303294  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:29.303355  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:29.328158  947325 cri.go:89] found id: ""
	I1213 10:37:29.328173  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.328180  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:29.328186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:29.328244  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:29.355541  947325 cri.go:89] found id: ""
	I1213 10:37:29.355556  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.355565  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:29.355570  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:29.355627  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:29.381411  947325 cri.go:89] found id: ""
	I1213 10:37:29.381426  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.381433  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:29.381439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:29.381501  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:29.407073  947325 cri.go:89] found id: ""
	I1213 10:37:29.407088  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.407094  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:29.407101  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:29.407113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:29.422330  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:29.422347  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:29.498825  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:29.498837  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:29.498850  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:29.575835  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:29.575856  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:29.607770  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:29.607790  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.181248  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.191812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:32.191876  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:32.217211  947325 cri.go:89] found id: ""
	I1213 10:37:32.217225  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.217233  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:32.217238  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:32.217293  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:32.243073  947325 cri.go:89] found id: ""
	I1213 10:37:32.243087  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.243095  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:32.243100  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:32.243172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:32.272999  947325 cri.go:89] found id: ""
	I1213 10:37:32.273013  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.273020  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:32.273025  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:32.273084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:32.299079  947325 cri.go:89] found id: ""
	I1213 10:37:32.299092  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.299099  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:32.299104  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:32.299161  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:32.328707  947325 cri.go:89] found id: ""
	I1213 10:37:32.328722  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.328729  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:32.328734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:32.328795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:32.354361  947325 cri.go:89] found id: ""
	I1213 10:37:32.354375  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.354382  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:32.354388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:32.354448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:32.380069  947325 cri.go:89] found id: ""
	I1213 10:37:32.380083  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.380089  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:32.380096  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:32.380107  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.445012  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:32.445036  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:32.460199  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:32.460223  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:32.549445  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:32.549456  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:32.549467  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:32.617595  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:32.617617  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.148911  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.159421  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:35.159482  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:35.186963  947325 cri.go:89] found id: ""
	I1213 10:37:35.186976  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.186984  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:35.186989  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:35.187046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:35.216128  947325 cri.go:89] found id: ""
	I1213 10:37:35.216142  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.216153  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:35.216158  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:35.216217  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:35.244930  947325 cri.go:89] found id: ""
	I1213 10:37:35.244945  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.244953  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:35.244958  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:35.245020  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:35.270186  947325 cri.go:89] found id: ""
	I1213 10:37:35.270200  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.270207  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:35.270212  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:35.270268  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:35.296166  947325 cri.go:89] found id: ""
	I1213 10:37:35.296180  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.296187  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:35.296192  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:35.296249  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:35.325322  947325 cri.go:89] found id: ""
	I1213 10:37:35.325337  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.325344  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:35.325349  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:35.325411  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:35.350870  947325 cri.go:89] found id: ""
	I1213 10:37:35.350884  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.350892  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:35.350900  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:35.350911  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:35.365840  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:35.365857  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:35.428973  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:35.428993  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:35.429004  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:35.497503  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:35.497522  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.530732  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:35.530751  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:38.099975  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.110243  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:38.110306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:38.140777  947325 cri.go:89] found id: ""
	I1213 10:37:38.140792  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.140798  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:38.140804  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:38.140871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:38.167186  947325 cri.go:89] found id: ""
	I1213 10:37:38.167200  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.167207  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:38.167212  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:38.167276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:38.193305  947325 cri.go:89] found id: ""
	I1213 10:37:38.193318  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.193326  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:38.193331  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:38.193388  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:38.219451  947325 cri.go:89] found id: ""
	I1213 10:37:38.219464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.219472  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:38.219477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:38.219542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:38.249284  947325 cri.go:89] found id: ""
	I1213 10:37:38.249299  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.249306  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:38.249311  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:38.249380  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:38.275451  947325 cri.go:89] found id: ""
	I1213 10:37:38.275464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.275471  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:38.275477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:38.275538  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:38.300476  947325 cri.go:89] found id: ""
	I1213 10:37:38.300490  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.300497  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:38.300504  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:38.300517  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:38.366681  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:38.366700  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:38.381405  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:38.381423  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:38.441215  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:38.441225  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:38.441236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:38.508504  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:38.508525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.051455  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.061451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:41.061522  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:41.087312  947325 cri.go:89] found id: ""
	I1213 10:37:41.087331  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.087338  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:41.087343  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:41.087416  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:41.116231  947325 cri.go:89] found id: ""
	I1213 10:37:41.116246  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.116253  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:41.116258  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:41.116316  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:41.147429  947325 cri.go:89] found id: ""
	I1213 10:37:41.147444  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.147451  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:41.147457  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:41.147516  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:41.176551  947325 cri.go:89] found id: ""
	I1213 10:37:41.176565  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.176573  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:41.176578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:41.176634  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:41.204132  947325 cri.go:89] found id: ""
	I1213 10:37:41.204146  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.204154  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:41.204159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:41.204223  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:41.230785  947325 cri.go:89] found id: ""
	I1213 10:37:41.230799  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.230807  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:41.230813  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:41.230880  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:41.256411  947325 cri.go:89] found id: ""
	I1213 10:37:41.256425  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.256433  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:41.256440  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:41.256451  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.285617  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:41.285636  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:41.356895  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:41.356914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:41.371698  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:41.371714  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:41.436289  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:41.436299  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:41.436309  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.006670  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.021718  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:44.021788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:44.048534  947325 cri.go:89] found id: ""
	I1213 10:37:44.048549  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.048565  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:44.048571  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:44.048674  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:44.079425  947325 cri.go:89] found id: ""
	I1213 10:37:44.079439  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.079446  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:44.079451  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:44.079523  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:44.106317  947325 cri.go:89] found id: ""
	I1213 10:37:44.106334  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.106342  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:44.106348  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:44.106420  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:44.132520  947325 cri.go:89] found id: ""
	I1213 10:37:44.132534  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.132553  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:44.132558  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:44.132628  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:44.161205  947325 cri.go:89] found id: ""
	I1213 10:37:44.161219  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.161226  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:44.161231  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:44.161291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:44.187876  947325 cri.go:89] found id: ""
	I1213 10:37:44.187890  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.187898  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:44.187903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:44.187961  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:44.215854  947325 cri.go:89] found id: ""
	I1213 10:37:44.215869  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.215876  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:44.215884  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:44.215894  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:44.284854  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:44.284866  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:44.284876  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.355349  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:44.355373  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:44.384733  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:44.384752  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:44.453769  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:44.453788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:46.969736  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.979972  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:46.980038  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:47.007061  947325 cri.go:89] found id: ""
	I1213 10:37:47.007075  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.007082  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:47.007087  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:47.007146  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:47.036818  947325 cri.go:89] found id: ""
	I1213 10:37:47.036832  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.036858  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:47.036863  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:47.036921  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:47.061328  947325 cri.go:89] found id: ""
	I1213 10:37:47.061342  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.061349  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:47.061355  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:47.061415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:47.089017  947325 cri.go:89] found id: ""
	I1213 10:37:47.089032  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.089039  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:47.089044  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:47.089103  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:47.114790  947325 cri.go:89] found id: ""
	I1213 10:37:47.114803  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.114810  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:47.114817  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:47.114877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:47.139554  947325 cri.go:89] found id: ""
	I1213 10:37:47.139575  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.139583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:47.139589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:47.139654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:47.165228  947325 cri.go:89] found id: ""
	I1213 10:37:47.165241  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.165248  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:47.165256  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:47.165266  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:47.232293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:47.232313  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:47.261718  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:47.261736  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:47.331592  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:47.331613  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:47.345881  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:47.345897  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:47.412948  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:49.913659  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.923942  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:49.924005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:49.951850  947325 cri.go:89] found id: ""
	I1213 10:37:49.951863  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.951871  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:49.951876  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:49.951936  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:49.976949  947325 cri.go:89] found id: ""
	I1213 10:37:49.976963  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.976971  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:49.976976  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:49.977034  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:50.020670  947325 cri.go:89] found id: ""
	I1213 10:37:50.020686  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.020693  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:50.020698  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:50.020779  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:50.048299  947325 cri.go:89] found id: ""
	I1213 10:37:50.048316  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.048323  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:50.048328  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:50.048397  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:50.075060  947325 cri.go:89] found id: ""
	I1213 10:37:50.075074  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.075081  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:50.075087  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:50.075148  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:50.104579  947325 cri.go:89] found id: ""
	I1213 10:37:50.104593  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.104601  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:50.104607  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:50.104666  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:50.132679  947325 cri.go:89] found id: ""
	I1213 10:37:50.132693  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.132701  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:50.132714  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:50.132725  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:50.197209  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:50.197219  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:50.197230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:50.267157  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:50.267176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:50.297061  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:50.297077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:50.363929  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:50.363950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:52.879245  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.889673  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:52.889741  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:52.914746  947325 cri.go:89] found id: ""
	I1213 10:37:52.914768  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.914776  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:52.914781  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:52.914845  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:52.941523  947325 cri.go:89] found id: ""
	I1213 10:37:52.941554  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.941562  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:52.941567  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:52.941623  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:52.967010  947325 cri.go:89] found id: ""
	I1213 10:37:52.967027  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.967035  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:52.967040  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:52.967141  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:52.992300  947325 cri.go:89] found id: ""
	I1213 10:37:52.992313  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.992321  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:52.992326  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:52.992386  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:53.020045  947325 cri.go:89] found id: ""
	I1213 10:37:53.020058  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.020074  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:53.020081  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:53.020140  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:53.053897  947325 cri.go:89] found id: ""
	I1213 10:37:53.053911  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.053918  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:53.053923  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:53.053982  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:53.079867  947325 cri.go:89] found id: ""
	I1213 10:37:53.079882  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.079890  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:53.079897  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:53.079908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:53.144913  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:53.144932  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:53.159844  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:53.159861  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:53.226427  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:53.226436  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:53.226447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:53.294490  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:53.294510  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
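
When every component probe comes back empty, the log gathering above falls back to host-side sources: kubelet and CRI-O units via journalctl, the kernel ring buffer via dmesg, and a raw container listing with a docker fallback. A hedged Go sketch that runs the same shell commands seen in the log (locally, not over SSH as minikube does):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exactly the collection commands visible in the "Gathering logs for ..." lines.
    	cmds := map[string]string{
    		"kubelet": "sudo journalctl -u kubelet -n 400",
    		"CRI-O":   "sudo journalctl -u crio -n 400",
    		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"status":  "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
    	}
    }
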
	I1213 10:37:55.827710  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.837950  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:55.838028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:55.863239  947325 cri.go:89] found id: ""
	I1213 10:37:55.863253  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.863260  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:55.863265  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:55.863331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:55.892876  947325 cri.go:89] found id: ""
	I1213 10:37:55.892890  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.892897  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:55.892902  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:55.892962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:55.919038  947325 cri.go:89] found id: ""
	I1213 10:37:55.919051  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.919059  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:55.919064  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:55.919123  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:55.944982  947325 cri.go:89] found id: ""
	I1213 10:37:55.944997  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.945004  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:55.945009  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:55.945066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:55.974750  947325 cri.go:89] found id: ""
	I1213 10:37:55.974764  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.974771  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:55.974776  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:55.974836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:56.006337  947325 cri.go:89] found id: ""
	I1213 10:37:56.006352  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.006360  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:56.006365  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:56.006429  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:56.033183  947325 cri.go:89] found id: ""
	I1213 10:37:56.033199  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.033206  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:56.033214  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:56.033225  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:56.098781  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:56.098801  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:56.113910  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:56.113933  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:56.179999  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:56.180009  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:56.180020  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:56.248249  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:56.248271  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:58.777669  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.788383  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:58.788443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:58.815846  947325 cri.go:89] found id: ""
	I1213 10:37:58.815861  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.815868  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:58.815873  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:58.815933  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:58.845912  947325 cri.go:89] found id: ""
	I1213 10:37:58.845926  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.845933  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:58.845938  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:58.846003  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:58.870933  947325 cri.go:89] found id: ""
	I1213 10:37:58.870947  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.870954  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:58.870959  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:58.871017  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:58.900972  947325 cri.go:89] found id: ""
	I1213 10:37:58.900986  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.900993  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:58.900998  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:58.901054  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:58.926234  947325 cri.go:89] found id: ""
	I1213 10:37:58.926257  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.926266  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:58.926271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:58.926338  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:58.951314  947325 cri.go:89] found id: ""
	I1213 10:37:58.951328  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.951335  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:58.951340  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:58.951398  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:58.981974  947325 cri.go:89] found id: ""
	I1213 10:37:58.981989  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.981996  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:58.982003  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:58.982014  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:59.047152  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:59.047172  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:59.062001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:59.062019  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:59.127736  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:59.127748  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:59.127759  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:59.196288  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:59.196308  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:01.726269  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.738227  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:01.738290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:01.765402  947325 cri.go:89] found id: ""
	I1213 10:38:01.765416  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.765423  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:01.765428  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:01.765487  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:01.797073  947325 cri.go:89] found id: ""
	I1213 10:38:01.797087  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.797094  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:01.797105  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:01.797165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:01.822923  947325 cri.go:89] found id: ""
	I1213 10:38:01.822936  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.822943  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:01.822948  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:01.823004  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:01.847458  947325 cri.go:89] found id: ""
	I1213 10:38:01.847472  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.847479  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:01.847484  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:01.847542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:01.876363  947325 cri.go:89] found id: ""
	I1213 10:38:01.876376  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.876383  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:01.876388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:01.876445  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:01.901894  947325 cri.go:89] found id: ""
	I1213 10:38:01.901908  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.901915  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:01.901920  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:01.901977  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:01.927538  947325 cri.go:89] found id: ""
	I1213 10:38:01.927556  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.927563  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:01.927571  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:01.927585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:01.993043  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:01.993063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:02.009861  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:02.009878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:02.079070  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:02.079087  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:02.079097  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:02.150335  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:02.150355  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.680156  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.690471  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:04.690534  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:04.717027  947325 cri.go:89] found id: ""
	I1213 10:38:04.717042  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.717049  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:04.717055  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:04.717116  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:04.751100  947325 cri.go:89] found id: ""
	I1213 10:38:04.751114  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.751121  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:04.751126  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:04.751185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:04.785118  947325 cri.go:89] found id: ""
	I1213 10:38:04.785133  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.785140  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:04.785145  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:04.785206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:04.811838  947325 cri.go:89] found id: ""
	I1213 10:38:04.811852  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.811859  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:04.811864  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:04.811924  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:04.837476  947325 cri.go:89] found id: ""
	I1213 10:38:04.837489  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.837497  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:04.837502  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:04.837589  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:04.863616  947325 cri.go:89] found id: ""
	I1213 10:38:04.863630  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.863637  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:04.863642  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:04.864028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:04.897282  947325 cri.go:89] found id: ""
	I1213 10:38:04.897297  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.897304  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:04.897311  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:04.897322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:04.970089  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:04.970112  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.998787  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:04.998808  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:05.071114  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:05.071136  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:05.086764  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:05.086780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:05.152705  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
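
The timestamps (10:37:49, :52, :55, :58, 10:38:01, :04, ...) show the whole probe-and-gather cycle repeating on roughly a three-second cadence until the apiserver appears or the caller gives up. A minimal sketch of that retry loop under those assumptions (waitForAPIServer and its timeout are hypothetical names, not minikube internals):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiServerRunning is the same check as the log's
    // "sudo pgrep -xnf kube-apiserver.*minikube.*": pgrep exits nonzero when
    // no matching process exists.
    func apiServerRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if apiServerRunning() {
    			return nil
    		}
    		time.Sleep(3 * time.Second) // matches the interval between probe cycles above
    	}
    	return fmt.Errorf("kube-apiserver did not start within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
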
	I1213 10:38:07.652961  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.663190  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:07.663256  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:07.687596  947325 cri.go:89] found id: ""
	I1213 10:38:07.687611  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.687619  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:07.687624  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:07.687682  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:07.712358  947325 cri.go:89] found id: ""
	I1213 10:38:07.712372  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.712379  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:07.712384  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:07.712443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:07.747606  947325 cri.go:89] found id: ""
	I1213 10:38:07.747620  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.747627  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:07.747632  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:07.747686  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:07.779928  947325 cri.go:89] found id: ""
	I1213 10:38:07.779942  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.779949  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:07.779954  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:07.780010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:07.809892  947325 cri.go:89] found id: ""
	I1213 10:38:07.809905  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.809912  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:07.809917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:07.809976  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:07.835954  947325 cri.go:89] found id: ""
	I1213 10:38:07.835969  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.835977  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:07.835983  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:07.836045  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:07.863613  947325 cri.go:89] found id: ""
	I1213 10:38:07.863628  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.863635  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:07.863643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:07.863653  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:07.934015  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:07.934035  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:07.949065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:07.949082  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:08.016099  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:08.016110  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:08.016120  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:08.086624  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:08.086643  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:10.620779  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.631455  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:10.631519  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:10.657004  947325 cri.go:89] found id: ""
	I1213 10:38:10.657018  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.657025  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:10.657031  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:10.657091  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:10.682863  947325 cri.go:89] found id: ""
	I1213 10:38:10.682879  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.682887  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:10.682892  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:10.682952  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:10.710656  947325 cri.go:89] found id: ""
	I1213 10:38:10.710671  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.710678  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:10.710684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:10.710744  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:10.751941  947325 cri.go:89] found id: ""
	I1213 10:38:10.751955  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.751962  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:10.751967  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:10.752027  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:10.784379  947325 cri.go:89] found id: ""
	I1213 10:38:10.784393  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.784400  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:10.784405  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:10.784462  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:10.812194  947325 cri.go:89] found id: ""
	I1213 10:38:10.812208  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.812215  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:10.812220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:10.812279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:10.837693  947325 cri.go:89] found id: ""
	I1213 10:38:10.837706  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.837714  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:10.837721  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:10.837732  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:10.903946  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:10.903965  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:10.918956  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:10.918972  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:10.991627  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:10.991638  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:10.991648  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:11.064139  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:11.064160  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:13.600555  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.610666  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:13.610728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:13.635608  947325 cri.go:89] found id: ""
	I1213 10:38:13.635622  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.635629  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:13.635635  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:13.635694  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:13.660494  947325 cri.go:89] found id: ""
	I1213 10:38:13.660509  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.660516  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:13.660521  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:13.660580  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:13.686792  947325 cri.go:89] found id: ""
	I1213 10:38:13.686807  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.686814  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:13.686820  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:13.686877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:13.712337  947325 cri.go:89] found id: ""
	I1213 10:38:13.712351  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.712358  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:13.712364  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:13.712421  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:13.751688  947325 cri.go:89] found id: ""
	I1213 10:38:13.751703  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.751710  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:13.751716  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:13.751771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:13.778873  947325 cri.go:89] found id: ""
	I1213 10:38:13.778886  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.778893  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:13.778898  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:13.778955  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:13.808036  947325 cri.go:89] found id: ""
	I1213 10:38:13.808050  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.808057  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:13.808065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:13.808081  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:13.874152  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:13.874162  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:13.874173  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:13.943404  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:13.943424  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:13.971540  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:13.971557  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:14.040558  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:14.040581  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:16.556175  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.566366  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:16.566428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:16.591757  947325 cri.go:89] found id: ""
	I1213 10:38:16.591772  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.591779  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:16.591785  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:16.591842  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:16.617244  947325 cri.go:89] found id: ""
	I1213 10:38:16.617259  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.617266  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:16.617271  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:16.617329  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:16.644168  947325 cri.go:89] found id: ""
	I1213 10:38:16.644182  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.644189  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:16.644194  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:16.644253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:16.673646  947325 cri.go:89] found id: ""
	I1213 10:38:16.673659  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.673666  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:16.673671  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:16.673729  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:16.698771  947325 cri.go:89] found id: ""
	I1213 10:38:16.698785  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.698793  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:16.698798  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:16.698857  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:16.726980  947325 cri.go:89] found id: ""
	I1213 10:38:16.726994  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.727001  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:16.727006  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:16.727066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:16.773642  947325 cri.go:89] found id: ""
	I1213 10:38:16.773657  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.773665  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:16.773673  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:16.773685  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:16.807643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:16.807660  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:16.874674  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:16.874698  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:16.890281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:16.890299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:16.958318  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:16.958330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:16.958343  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:19.528319  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.539728  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:19.539789  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:19.571108  947325 cri.go:89] found id: ""
	I1213 10:38:19.571121  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.571129  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:19.571134  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:19.571194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:19.597765  947325 cri.go:89] found id: ""
	I1213 10:38:19.597779  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.597787  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:19.597792  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:19.597853  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:19.623110  947325 cri.go:89] found id: ""
	I1213 10:38:19.623124  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.623137  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:19.623142  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:19.623204  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:19.648553  947325 cri.go:89] found id: ""
	I1213 10:38:19.648568  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.648575  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:19.648580  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:19.648652  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:19.674550  947325 cri.go:89] found id: ""
	I1213 10:38:19.674565  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.674572  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:19.674577  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:19.674635  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:19.704458  947325 cri.go:89] found id: ""
	I1213 10:38:19.704473  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.704480  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:19.704486  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:19.704560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:19.742545  947325 cri.go:89] found id: ""
	I1213 10:38:19.742559  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.742566  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:19.742573  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:19.742584  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:19.818214  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:19.818236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:19.833741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:19.833757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:19.899700  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:19.899710  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:19.899731  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:19.969264  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:19.969284  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:22.501918  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.513303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:22.513368  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:22.542006  947325 cri.go:89] found id: ""
	I1213 10:38:22.542020  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.542028  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:22.542033  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:22.542109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:22.572046  947325 cri.go:89] found id: ""
	I1213 10:38:22.572061  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.572068  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:22.572073  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:22.572131  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:22.599640  947325 cri.go:89] found id: ""
	I1213 10:38:22.599654  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.599660  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:22.599665  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:22.599728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:22.628632  947325 cri.go:89] found id: ""
	I1213 10:38:22.628646  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.628653  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:22.628658  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:22.628717  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:22.655032  947325 cri.go:89] found id: ""
	I1213 10:38:22.655046  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.655053  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:22.655058  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:22.655119  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:22.682403  947325 cri.go:89] found id: ""
	I1213 10:38:22.682422  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.682431  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:22.682436  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:22.682511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:22.709263  947325 cri.go:89] found id: ""
	I1213 10:38:22.709277  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.709286  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:22.709293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:22.709307  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:22.748554  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:22.748573  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:22.820355  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:22.820376  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:22.836069  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:22.836100  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:22.902594  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:22.902605  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:22.902616  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.474313  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.484536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:25.484600  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:25.512648  947325 cri.go:89] found id: ""
	I1213 10:38:25.512662  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.512670  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:25.512675  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:25.512736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:25.545720  947325 cri.go:89] found id: ""
	I1213 10:38:25.545739  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.545746  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:25.545752  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:25.545821  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:25.572807  947325 cri.go:89] found id: ""
	I1213 10:38:25.572820  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.572827  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:25.572832  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:25.572890  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:25.597850  947325 cri.go:89] found id: ""
	I1213 10:38:25.597864  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.597871  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:25.597876  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:25.597939  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:25.622944  947325 cri.go:89] found id: ""
	I1213 10:38:25.622958  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.622965  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:25.622971  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:25.623030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:25.647255  947325 cri.go:89] found id: ""
	I1213 10:38:25.647268  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.647276  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:25.647281  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:25.647339  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:25.672821  947325 cri.go:89] found id: ""
	I1213 10:38:25.672837  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.672844  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:25.672864  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:25.672875  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.744377  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:25.744397  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:25.773682  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:25.773699  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:25.843372  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:25.843396  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:25.858420  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:25.858437  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:25.923733  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:28.424008  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.434425  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:28.434490  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:28.459479  947325 cri.go:89] found id: ""
	I1213 10:38:28.459493  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.459501  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:28.459506  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:28.459569  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:28.488343  947325 cri.go:89] found id: ""
	I1213 10:38:28.488357  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.488365  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:28.488370  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:28.488431  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:28.513634  947325 cri.go:89] found id: ""
	I1213 10:38:28.513649  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.513656  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:28.513661  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:28.513719  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:28.540169  947325 cri.go:89] found id: ""
	I1213 10:38:28.540182  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.540190  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:28.540195  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:28.540253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:28.564331  947325 cri.go:89] found id: ""
	I1213 10:38:28.564344  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.564351  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:28.564356  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:28.564415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:28.592829  947325 cri.go:89] found id: ""
	I1213 10:38:28.592844  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.592851  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:28.592856  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:28.592913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:28.618020  947325 cri.go:89] found id: ""
	I1213 10:38:28.618035  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.618044  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:28.618052  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:28.618063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:28.685306  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:28.685326  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:28.713761  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:28.713779  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:28.794463  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:28.794484  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:28.809677  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:28.809696  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:28.870924  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.371199  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:31.381501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.381583  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.408362  947325 cri.go:89] found id: ""
	I1213 10:38:31.408376  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.408383  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.408388  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.408454  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.434743  947325 cri.go:89] found id: ""
	I1213 10:38:31.434758  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.434766  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.434772  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.434831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.467710  947325 cri.go:89] found id: ""
	I1213 10:38:31.467724  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.467731  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.467736  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.467795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.493177  947325 cri.go:89] found id: ""
	I1213 10:38:31.493191  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.493198  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.493203  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.493263  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.517966  947325 cri.go:89] found id: ""
	I1213 10:38:31.517980  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.517987  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.517992  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.518057  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.542186  947325 cri.go:89] found id: ""
	I1213 10:38:31.542201  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.542208  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.542213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.542270  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.567569  947325 cri.go:89] found id: ""
	I1213 10:38:31.567583  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.567590  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:31.567598  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.567609  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:31.633128  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.633147  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.647898  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.647916  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.713585  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.713595  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:31.713606  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:31.784338  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.784357  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.315454  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.327061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.327130  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.356795  947325 cri.go:89] found id: ""
	I1213 10:38:34.356809  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.356817  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.356822  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.356892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.384789  947325 cri.go:89] found id: ""
	I1213 10:38:34.384804  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.384812  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.384817  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.384907  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.410778  947325 cri.go:89] found id: ""
	I1213 10:38:34.410791  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.410799  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.410804  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.410861  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.440426  947325 cri.go:89] found id: ""
	I1213 10:38:34.440440  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.440454  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.440459  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.440514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.465148  947325 cri.go:89] found id: ""
	I1213 10:38:34.465162  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.465170  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.465175  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.465236  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.491230  947325 cri.go:89] found id: ""
	I1213 10:38:34.491245  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.491253  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.491259  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.491364  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.520190  947325 cri.go:89] found id: ""
	I1213 10:38:34.520205  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.520213  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:34.520220  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.520235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.552635  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.552652  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.617894  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.617914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:34.632507  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.632528  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.697693  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:34.697704  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:34.697715  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.276776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.287236  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.287306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.314091  947325 cri.go:89] found id: ""
	I1213 10:38:37.314105  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.314112  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.314118  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.314180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.343079  947325 cri.go:89] found id: ""
	I1213 10:38:37.343092  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.343099  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.343104  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.343162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.371406  947325 cri.go:89] found id: ""
	I1213 10:38:37.371420  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.371428  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.371432  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.371489  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.400383  947325 cri.go:89] found id: ""
	I1213 10:38:37.400398  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.400405  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.400415  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.400473  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.432217  947325 cri.go:89] found id: ""
	I1213 10:38:37.432232  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.432240  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.432245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.432306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.459687  947325 cri.go:89] found id: ""
	I1213 10:38:37.459701  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.459708  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.459713  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.459771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.491295  947325 cri.go:89] found id: ""
	I1213 10:38:37.491309  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.491316  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.491324  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:37.491335  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.569044  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.569068  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.598399  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.598416  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:37.669854  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.669873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.685001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.685024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.764039  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.754418   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.755520   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.757588   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.758501   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.759525   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:40.265130  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.276597  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.276660  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.301799  947325 cri.go:89] found id: ""
	I1213 10:38:40.301815  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.301822  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.301828  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.301884  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.328096  947325 cri.go:89] found id: ""
	I1213 10:38:40.328110  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.328117  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.328122  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.328180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.352505  947325 cri.go:89] found id: ""
	I1213 10:38:40.352520  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.352527  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.352532  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.352592  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.381218  947325 cri.go:89] found id: ""
	I1213 10:38:40.381233  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.381240  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.381245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.381303  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.406747  947325 cri.go:89] found id: ""
	I1213 10:38:40.406761  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.406769  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.406774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.406836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.432179  947325 cri.go:89] found id: ""
	I1213 10:38:40.432193  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.432200  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.432230  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.432294  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.457241  947325 cri.go:89] found id: ""
	I1213 10:38:40.457256  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.457263  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.457270  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.457281  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:40.485384  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.485400  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.553931  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.553950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.568552  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.568568  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.631691  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.623997   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.624643   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626097   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626582   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.628021   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:40.631701  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:40.631711  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.202405  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.212618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.212681  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.237960  947325 cri.go:89] found id: ""
	I1213 10:38:43.237975  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.237981  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.237986  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.238046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.262400  947325 cri.go:89] found id: ""
	I1213 10:38:43.262415  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.262422  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.262427  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.262485  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.287113  947325 cri.go:89] found id: ""
	I1213 10:38:43.287126  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.287133  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.287138  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.287194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.311437  947325 cri.go:89] found id: ""
	I1213 10:38:43.311451  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.311459  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.311464  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.311520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.338038  947325 cri.go:89] found id: ""
	I1213 10:38:43.338052  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.338059  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.338066  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.338125  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.363248  947325 cri.go:89] found id: ""
	I1213 10:38:43.363262  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.363269  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.363274  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.363331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.388331  947325 cri.go:89] found id: ""
	I1213 10:38:43.388346  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.388353  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.388361  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.388371  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.456040  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.448208   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.448885   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.450561   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.451211   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.452293   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:43.456051  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:43.456062  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.529676  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.529697  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:43.557667  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.557683  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.626256  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.626276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.141151  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.151629  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.151691  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.177078  947325 cri.go:89] found id: ""
	I1213 10:38:46.177092  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.177099  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.177104  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.177163  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.203681  947325 cri.go:89] found id: ""
	I1213 10:38:46.203695  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.203702  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.203707  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.203765  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.228801  947325 cri.go:89] found id: ""
	I1213 10:38:46.228815  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.228823  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.228828  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.228892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.254742  947325 cri.go:89] found id: ""
	I1213 10:38:46.254756  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.254763  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.254768  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.254825  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.286504  947325 cri.go:89] found id: ""
	I1213 10:38:46.286522  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.286529  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.286534  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.286596  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.311507  947325 cri.go:89] found id: ""
	I1213 10:38:46.311523  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.311531  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.311536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.311599  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.340455  947325 cri.go:89] found id: ""
	I1213 10:38:46.340469  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.340477  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.340496  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.340508  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.410798  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.410817  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.425740  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.425758  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.488528  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.479589   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.480382   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482285   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482891   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.484595   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:46.488537  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:46.488549  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:46.558649  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.558668  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:49.089125  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:49.099199  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:49.099261  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.128242  947325 cri.go:89] found id: ""
	I1213 10:38:49.128256  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.128263  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.128268  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.128328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.154103  947325 cri.go:89] found id: ""
	I1213 10:38:49.154117  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.154124  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.154129  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.154189  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.178738  947325 cri.go:89] found id: ""
	I1213 10:38:49.178754  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.178762  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.178767  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.178824  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.203209  947325 cri.go:89] found id: ""
	I1213 10:38:49.203223  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.203230  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.203235  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.203290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.228158  947325 cri.go:89] found id: ""
	I1213 10:38:49.228174  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.228181  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.228186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.228245  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.257410  947325 cri.go:89] found id: ""
	I1213 10:38:49.257425  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.257432  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.257437  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.257503  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.284405  947325 cri.go:89] found id: ""
	I1213 10:38:49.284419  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.284428  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.284436  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.284447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.350814  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.350834  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:49.365897  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.365914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.428434  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.419689   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.420411   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422194   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422785   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.424440   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:49.428445  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:49.428455  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:49.497319  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.497338  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.026790  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:52.037493  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:52.037629  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:52.065928  947325 cri.go:89] found id: ""
	I1213 10:38:52.065942  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.065959  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:52.065966  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:52.066030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:52.093348  947325 cri.go:89] found id: ""
	I1213 10:38:52.093377  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.093385  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:52.093391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:52.093461  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.120408  947325 cri.go:89] found id: ""
	I1213 10:38:52.120438  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.120446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.120451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.120520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.151619  947325 cri.go:89] found id: ""
	I1213 10:38:52.151633  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.151640  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.151645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.151709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.181293  947325 cri.go:89] found id: ""
	I1213 10:38:52.181307  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.181314  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.181319  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.181381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.207056  947325 cri.go:89] found id: ""
	I1213 10:38:52.207073  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.207080  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.207085  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.207144  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.232482  947325 cri.go:89] found id: ""
	I1213 10:38:52.232495  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.232503  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.232511  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.232523  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:52.298884  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.298908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.314165  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.314184  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.379432  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.370728   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.371164   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.372944   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.373396   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.375062   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:52.379442  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:52.379454  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:52.447720  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.447739  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:54.981781  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.994265  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.994331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:55.034512  947325 cri.go:89] found id: ""
	I1213 10:38:55.034527  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.034535  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:55.034541  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:55.034603  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:55.064371  947325 cri.go:89] found id: ""
	I1213 10:38:55.064385  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.064393  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:55.064399  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:55.064464  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:55.094614  947325 cri.go:89] found id: ""
	I1213 10:38:55.094628  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.094635  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:55.094640  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:55.094703  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:55.122445  947325 cri.go:89] found id: ""
	I1213 10:38:55.122469  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.122476  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:55.122482  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:55.122565  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:55.149483  947325 cri.go:89] found id: ""
	I1213 10:38:55.149497  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.149505  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:55.149510  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:55.149608  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.177190  947325 cri.go:89] found id: ""
	I1213 10:38:55.177204  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.177211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.177216  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.177276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.205792  947325 cri.go:89] found id: ""
	I1213 10:38:55.205805  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.205813  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.205820  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:55.205831  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:55.274521  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.274543  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.303850  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.303867  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.372053  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.372072  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.386741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.386757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.453760  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.443866   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.444485   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.446205   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448348   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448876   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
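	(Each iteration above also runs sudo crictl ps -a --quiet --name=<component> for every control-plane component and reports "No container was found matching" when the ID list comes back empty. A simplified sketch of that polling step, assuming crictl is on PATH and sudo is available; listCRIContainers is a hypothetical helper, and minikube's real implementation runs these commands over SSH via ssh_runner rather than locally.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers mirrors the cri.go calls in the log: ask crictl for
	// the IDs of all containers (running or not) whose name matches.
	func listCRIContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listCRIContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
	}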
	I1213 10:38:57.954020  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.964050  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.964109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.998468  947325 cri.go:89] found id: ""
	I1213 10:38:57.998484  947325 logs.go:282] 0 containers: []
	W1213 10:38:57.998492  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.998497  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.998564  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:58.035565  947325 cri.go:89] found id: ""
	I1213 10:38:58.035580  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.035587  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:58.035592  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:58.035654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:58.066882  947325 cri.go:89] found id: ""
	I1213 10:38:58.066903  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.066912  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:58.066917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:58.066978  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:58.092977  947325 cri.go:89] found id: ""
	I1213 10:38:58.093007  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.093014  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:58.093019  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:58.093088  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:58.123222  947325 cri.go:89] found id: ""
	I1213 10:38:58.123235  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.123243  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:58.123248  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:58.123311  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:58.148191  947325 cri.go:89] found id: ""
	I1213 10:38:58.148204  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.148211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:58.148226  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:58.148283  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.174245  947325 cri.go:89] found id: ""
	I1213 10:38:58.174259  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.174266  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:58.174274  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.174286  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.238353  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.230226   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.230884   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232487   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232939   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.234404   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:38:58.238363  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:58.238374  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:58.310390  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.310414  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:58.339218  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.339235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.411033  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.411053  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:00.926322  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.937217  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.937279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.963631  947325 cri.go:89] found id: ""
	I1213 10:39:00.963645  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.963653  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.963658  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.963720  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.992312  947325 cri.go:89] found id: ""
	I1213 10:39:00.992327  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.992334  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.992340  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.992402  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:01.019653  947325 cri.go:89] found id: ""
	I1213 10:39:01.019667  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.019674  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:01.019679  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:01.019737  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:01.048197  947325 cri.go:89] found id: ""
	I1213 10:39:01.048211  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.048218  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:01.048224  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:01.048278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:01.077274  947325 cri.go:89] found id: ""
	I1213 10:39:01.077288  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.077296  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:01.077301  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:01.077359  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:01.102210  947325 cri.go:89] found id: ""
	I1213 10:39:01.102225  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.102232  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:01.102237  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:01.102296  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:01.127343  947325 cri.go:89] found id: ""
	I1213 10:39:01.127357  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.127364  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:01.127372  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:01.127384  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:01.193045  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:01.184559   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.185631   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.186444   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187426   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187971   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 10:39:01.193056  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:01.193066  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:01.263652  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.263672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.300661  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.300679  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.369051  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.369070  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.885575  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.895834  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.895898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.926313  947325 cri.go:89] found id: ""
	I1213 10:39:03.926327  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.926335  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.926339  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.926396  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.954240  947325 cri.go:89] found id: ""
	I1213 10:39:03.954254  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.954261  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.954266  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.954324  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.984134  947325 cri.go:89] found id: ""
	I1213 10:39:03.984148  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.984154  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.984159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.984224  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:04.016879  947325 cri.go:89] found id: ""
	I1213 10:39:04.016894  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.016901  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:04.016906  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:04.016965  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:04.041176  947325 cri.go:89] found id: ""
	I1213 10:39:04.041190  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.041203  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:04.041208  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:04.041267  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:04.066331  947325 cri.go:89] found id: ""
	I1213 10:39:04.066345  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.066351  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:04.066357  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:04.066415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:04.090857  947325 cri.go:89] found id: ""
	I1213 10:39:04.090886  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.090895  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:04.090903  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:04.090917  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:04.156570  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:04.156590  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:04.171387  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:04.171404  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.240263  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:04.240273  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:04.240285  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:04.319651  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.319672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
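	[editor's note] The cri.go lines in each cycle follow one pattern: for every control-plane component, run `crictl ps -a --quiet --name=<component>`, which prints one container ID per line; empty output is what produces `found id: ""` and `0 containers: []`. A minimal sketch of that shell-out, assuming crictl is on PATH and sudo is available (listContainerIDs is a hypothetical helper, not minikube's function):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the listing pattern in the log: --quiet prints
	// one container ID per line, so an empty result means "0 containers".
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainerIDs(c)
			fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
		}
	}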
	I1213 10:39:06.852882  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.864121  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.864186  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.890733  947325 cri.go:89] found id: ""
	I1213 10:39:06.890748  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.890756  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.890761  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.890819  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.917207  947325 cri.go:89] found id: ""
	I1213 10:39:06.917222  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.917228  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.917234  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.917291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.943186  947325 cri.go:89] found id: ""
	I1213 10:39:06.943201  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.943208  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.943213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.943278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.973557  947325 cri.go:89] found id: ""
	I1213 10:39:06.973571  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.973579  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.973584  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.973641  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:07.004748  947325 cri.go:89] found id: ""
	I1213 10:39:07.004770  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.004778  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:07.004783  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:07.004851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:07.030997  947325 cri.go:89] found id: ""
	I1213 10:39:07.031011  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.031019  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:07.031024  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:07.031080  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:07.055983  947325 cri.go:89] found id: ""
	I1213 10:39:07.055997  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.056004  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:07.056012  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:07.056024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:07.084902  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:07.084919  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:07.153213  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:07.153232  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:07.168429  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:07.168446  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.232563  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:07.232586  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:07.232598  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:09.804561  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.814452  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.814514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.840080  947325 cri.go:89] found id: ""
	I1213 10:39:09.840093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.840101  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.840106  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.840170  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.864603  947325 cri.go:89] found id: ""
	I1213 10:39:09.864617  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.864625  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.864630  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.864697  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.889079  947325 cri.go:89] found id: ""
	I1213 10:39:09.889093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.889101  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.889106  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.889162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.915869  947325 cri.go:89] found id: ""
	I1213 10:39:09.915883  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.915890  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.915895  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.915954  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.945590  947325 cri.go:89] found id: ""
	I1213 10:39:09.945603  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.945610  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.945618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.945678  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.971712  947325 cri.go:89] found id: ""
	I1213 10:39:09.971725  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.971732  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.971737  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.971798  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:10.003581  947325 cri.go:89] found id: ""
	I1213 10:39:10.003600  947325 logs.go:282] 0 containers: []
	W1213 10:39:10.003608  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:10.003618  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:10.003633  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:10.077821  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:10.077842  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:10.108375  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:10.108392  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:10.178400  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:10.178420  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:10.193608  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:10.193647  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:10.270772  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
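	[editor's note] The cadence of the cycles above (10:39:01 → :03 → :06 → :09 → :12 ...) shows a roughly three-second poll: each round starts with `sudo pgrep -xnf kube-apiserver.*minikube.*` and, on a miss, gathers diagnostics before retrying. A minimal poll-loop sketch under an assumed overall deadline; this is the shape of the loop, not minikube's actual code or timeout:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Deadline is assumed for illustration; minikube's real timeout differs.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists,
			// so a nil error from Run() means the apiserver is up.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			// In the log, each miss triggers a diagnostics round
			// (journalctl, dmesg, crictl, kubectl describe nodes) first.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}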
	I1213 10:39:12.771904  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.782049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.782110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.806673  947325 cri.go:89] found id: ""
	I1213 10:39:12.806687  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.806695  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.806700  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.806757  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.835814  947325 cri.go:89] found id: ""
	I1213 10:39:12.835829  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.835836  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.835841  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.835898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.861712  947325 cri.go:89] found id: ""
	I1213 10:39:12.861727  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.861734  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.861740  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.861804  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.886652  947325 cri.go:89] found id: ""
	I1213 10:39:12.886666  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.886673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.886678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.886736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.916010  947325 cri.go:89] found id: ""
	I1213 10:39:12.916025  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.916032  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.916037  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.916100  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.946655  947325 cri.go:89] found id: ""
	I1213 10:39:12.946672  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.946679  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.946684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.946748  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.976684  947325 cri.go:89] found id: ""
	I1213 10:39:12.976698  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.976705  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.976713  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.976726  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:13.043449  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:13.043472  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:13.059281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:13.059299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:13.122969  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:13.122981  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:13.122991  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:13.193301  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:13.193322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:15.728135  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.739049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.739110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.764321  947325 cri.go:89] found id: ""
	I1213 10:39:15.764335  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.764342  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.764348  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.764410  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.794053  947325 cri.go:89] found id: ""
	I1213 10:39:15.794068  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.794077  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.794083  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.794138  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.819708  947325 cri.go:89] found id: ""
	I1213 10:39:15.819721  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.819729  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.819734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.819793  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.850534  947325 cri.go:89] found id: ""
	I1213 10:39:15.850548  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.850556  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.850561  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.850618  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.879609  947325 cri.go:89] found id: ""
	I1213 10:39:15.879623  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.879631  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.879636  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.879700  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.908873  947325 cri.go:89] found id: ""
	I1213 10:39:15.908887  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.908895  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.908901  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.908967  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.936537  947325 cri.go:89] found id: ""
	I1213 10:39:15.936552  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.936559  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.936567  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:15.936580  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:16.005668  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:16.005690  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:16.036804  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:16.036822  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:16.105762  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:16.105780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:16.121830  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:16.121849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:16.189324  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
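	[editor's note] The "describe nodes" step shells out to the versioned kubectl binary with the node's kubeconfig. The same reachability question can be asked programmatically with client-go, which fails with the identical "connection refused" while nothing serves localhost:8441. A hedged sketch, assuming client-go is vendored and the code runs somewhere /var/lib/minikube/kubeconfig is readable (on the node it is root-only); this is not how minikube gathers this log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Same kubeconfig the log's kubectl invocation uses.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println("build client:", err)
			return
		}
		// While the apiserver is down this returns the same
		// "connection refused" seen in the stderr blocks above.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Println("list nodes:", err)
			return
		}
		fmt.Println("nodes:", len(nodes.Items))
	}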
	I1213 10:39:18.689610  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.699729  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.699788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.725083  947325 cri.go:89] found id: ""
	I1213 10:39:18.725097  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.725105  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.725110  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.725165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.751300  947325 cri.go:89] found id: ""
	I1213 10:39:18.751315  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.751327  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.751333  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.751390  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.776458  947325 cri.go:89] found id: ""
	I1213 10:39:18.776473  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.776480  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.776485  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.776543  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.801403  947325 cri.go:89] found id: ""
	I1213 10:39:18.801416  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.801423  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.801428  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.801488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.828035  947325 cri.go:89] found id: ""
	I1213 10:39:18.828053  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.828060  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.828065  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.828122  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.852563  947325 cri.go:89] found id: ""
	I1213 10:39:18.852577  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.852583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.852589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.852647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.879882  947325 cri.go:89] found id: ""
	I1213 10:39:18.879897  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.879904  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.879912  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.879922  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.913762  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.913788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.978817  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.978840  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.994917  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.994936  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:19.062190  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:19.062201  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:19.062213  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.629331  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.639522  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.639593  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.664074  947325 cri.go:89] found id: ""
	I1213 10:39:21.664089  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.664097  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.664102  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.664164  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.689123  947325 cri.go:89] found id: ""
	I1213 10:39:21.689136  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.689144  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.689149  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.689206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.713736  947325 cri.go:89] found id: ""
	I1213 10:39:21.713750  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.713758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.713762  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.713817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.741978  947325 cri.go:89] found id: ""
	I1213 10:39:21.741991  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.741999  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.742004  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.742063  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.767443  947325 cri.go:89] found id: ""
	I1213 10:39:21.767458  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.767464  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.767469  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.767526  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.792419  947325 cri.go:89] found id: ""
	I1213 10:39:21.792434  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.792457  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.792463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.792529  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.821837  947325 cri.go:89] found id: ""
	I1213 10:39:21.821851  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.821859  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.821867  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.821878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.836299  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.836315  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.902625  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:21.902635  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:21.902646  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.971184  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.971204  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:22.003828  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:22.003847  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.576083  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.587706  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.587784  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.613620  947325 cri.go:89] found id: ""
	I1213 10:39:24.613635  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.613643  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.613648  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.613706  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.639792  947325 cri.go:89] found id: ""
	I1213 10:39:24.639807  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.639814  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.639820  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.639897  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.664551  947325 cri.go:89] found id: ""
	I1213 10:39:24.664566  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.664573  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.664578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.664638  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.689748  947325 cri.go:89] found id: ""
	I1213 10:39:24.689762  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.689769  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.689774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.689831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.718617  947325 cri.go:89] found id: ""
	I1213 10:39:24.718632  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.718639  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.718645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.718702  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.748026  947325 cri.go:89] found id: ""
	I1213 10:39:24.748040  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.748047  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.748052  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.748117  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.774049  947325 cri.go:89] found id: ""
	I1213 10:39:24.774063  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.774070  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.774084  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.774095  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.840008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.840029  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.855570  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.855587  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.924254  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:24.924266  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:24.924276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:24.993620  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.993639  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:27.529665  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:27.539536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:27.539597  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:27.564505  947325 cri.go:89] found id: ""
	I1213 10:39:27.564519  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.564526  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:27.564531  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:27.564591  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:27.590383  947325 cri.go:89] found id: ""
	I1213 10:39:27.590397  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.590405  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:27.590410  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:27.590474  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:27.615895  947325 cri.go:89] found id: ""
	I1213 10:39:27.615909  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.615916  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:27.615921  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:27.615979  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:27.647656  947325 cri.go:89] found id: ""
	I1213 10:39:27.647670  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.647678  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:27.647683  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:27.647741  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:27.673365  947325 cri.go:89] found id: ""
	I1213 10:39:27.673379  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.673385  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:27.673390  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:27.673448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:27.698006  947325 cri.go:89] found id: ""
	I1213 10:39:27.698020  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.698028  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:27.698033  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:27.698096  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:27.722664  947325 cri.go:89] found id: ""
	I1213 10:39:27.722688  947325 logs.go:282] 0 containers: []
	W1213 10:39:27.722695  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:27.722702  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:27.722713  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:27.793605  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:27.793629  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:27.808404  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:27.808420  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:27.875877  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:27.866856   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.867426   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869149   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869660   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.871392   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:27.866856   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.867426   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869149   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.869660   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:27.871392   15813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
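	The connection-refused errors above indicate that nothing is listening on the apiserver port at all, rather than a kubectl configuration problem. A minimal standalone probe (illustrative only, not part of minikube or the test suite; the address localhost:8441 is taken from the log) reproduces the same check in Go:

// Hypothetical standalone probe, not minikube code: dial the apiserver
// port that kubectl is failing to reach in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8441 is the address kubectl is trying in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver running this prints
		// "connect: connection refused", matching the kubectl errors.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

	Run on the node while the apiserver is down, this prints the same "connect: connection refused" error that kubectl reports above.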
	I1213 10:39:27.875886  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:27.875898  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:27.944703  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:27.944723  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
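	For readers unfamiliar with the pattern, the cycle above is a wait loop: minikube polls for control-plane containers and, while none exist, re-gathers diagnostics before trying again. The sketch below is an illustrative reconstruction from the log, not minikube's actual implementation (which the log attributes to cri.go and logs.go); the helper name listContainerIDs is hypothetical:

// Illustrative reconstruction of the wait loop visible in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`,
// which prints one container ID per line (empty when nothing matches).
func listContainerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for {
		anyFound := false
		for _, name := range components {
			if ids := listContainerIDs(name); len(ids) > 0 {
				anyFound = true
			} else {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
		if anyFound {
			return // something is starting; hand off to the health checks
		}
		// The log shows roughly a 3 s cadence between cycles.
		time.Sleep(3 * time.Second)
	}
}

	In this run no component container ever appears, so the diagnostic pass simply recurs every few seconds, as summarized below.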
	I1213 10:39:30.475788  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:30.486929  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:30.486993  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:30.523816  947325 cri.go:89] found id: ""
	I1213 10:39:30.523830  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.523837  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:30.523843  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:30.523899  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:30.556573  947325 cri.go:89] found id: ""
	I1213 10:39:30.556586  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.556593  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:30.556598  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:30.556666  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:30.581886  947325 cri.go:89] found id: ""
	I1213 10:39:30.581900  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.581907  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:30.581912  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:30.581972  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:30.611853  947325 cri.go:89] found id: ""
	I1213 10:39:30.611878  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.611886  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:30.611891  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:30.611959  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:30.636125  947325 cri.go:89] found id: ""
	I1213 10:39:30.636140  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.636147  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:30.636152  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:30.636213  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:30.661404  947325 cri.go:89] found id: ""
	I1213 10:39:30.661418  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.661425  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:30.661430  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:30.661490  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:30.686369  947325 cri.go:89] found id: ""
	I1213 10:39:30.686382  947325 logs.go:282] 0 containers: []
	W1213 10:39:30.686390  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:30.686397  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:30.686408  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:30.752100  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:30.752120  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:30.766471  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:30.766487  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:30.831347  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:30.823244   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.823892   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.825523   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.826100   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.827547   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:30.823244   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.823892   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.825523   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.826100   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:30.827547   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:30.831356  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:30.831367  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:30.899699  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:30.899718  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.428636  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:33.438752  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:33.438815  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:33.467201  947325 cri.go:89] found id: ""
	I1213 10:39:33.467215  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.467222  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:33.467227  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:33.467285  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:33.496554  947325 cri.go:89] found id: ""
	I1213 10:39:33.496570  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.496577  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:33.496582  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:33.496650  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:33.523431  947325 cri.go:89] found id: ""
	I1213 10:39:33.523446  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.523453  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:33.523457  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:33.523517  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:33.559332  947325 cri.go:89] found id: ""
	I1213 10:39:33.559346  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.559353  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:33.559358  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:33.559413  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:33.587632  947325 cri.go:89] found id: ""
	I1213 10:39:33.587645  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.587653  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:33.587658  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:33.587714  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:33.612223  947325 cri.go:89] found id: ""
	I1213 10:39:33.612237  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.612266  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:33.612271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:33.612339  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:33.636321  947325 cri.go:89] found id: ""
	I1213 10:39:33.636344  947325 logs.go:282] 0 containers: []
	W1213 10:39:33.636351  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:33.636359  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:33.636373  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:33.650977  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:33.650993  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:33.710121  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:33.702522   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.703286   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.704577   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.705062   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.706497   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:33.702522   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.703286   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.704577   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.705062   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:33.706497   16018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:33.710132  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:33.710143  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:33.781081  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:33.781101  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:33.810866  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:33.810882  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.380753  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:36.390598  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:36.390659  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:36.416068  947325 cri.go:89] found id: ""
	I1213 10:39:36.416083  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.416090  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:36.416097  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:36.416156  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:36.443939  947325 cri.go:89] found id: ""
	I1213 10:39:36.443954  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.443968  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:36.443973  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:36.444031  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:36.468690  947325 cri.go:89] found id: ""
	I1213 10:39:36.468704  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.468711  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:36.468716  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:36.468772  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:36.498941  947325 cri.go:89] found id: ""
	I1213 10:39:36.498955  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.498962  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:36.498967  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:36.499033  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:36.538080  947325 cri.go:89] found id: ""
	I1213 10:39:36.538103  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.538111  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:36.538116  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:36.538179  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:36.563132  947325 cri.go:89] found id: ""
	I1213 10:39:36.563147  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.563154  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:36.563160  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:36.563217  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:36.588756  947325 cri.go:89] found id: ""
	I1213 10:39:36.588780  947325 logs.go:282] 0 containers: []
	W1213 10:39:36.588789  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:36.588797  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:36.588812  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:36.653330  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:36.653350  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:36.670404  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:36.670421  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:36.742327  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:36.732861   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.733828   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.734900   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.736542   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.737273   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:36.732861   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.733828   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.734900   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.736542   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:36.737273   16126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:36.742339  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:36.742350  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:36.811143  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:36.811163  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:39.339643  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:39.349836  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:39.349897  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:39.375161  947325 cri.go:89] found id: ""
	I1213 10:39:39.375175  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.375194  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:39.375200  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:39.375262  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:39.400364  947325 cri.go:89] found id: ""
	I1213 10:39:39.400393  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.400402  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:39.400407  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:39.400473  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:39.427167  947325 cri.go:89] found id: ""
	I1213 10:39:39.427182  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.427189  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:39.427195  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:39.427270  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:39.455933  947325 cri.go:89] found id: ""
	I1213 10:39:39.455960  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.455967  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:39.455973  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:39.456041  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:39.489827  947325 cri.go:89] found id: ""
	I1213 10:39:39.489840  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.489847  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:39.489852  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:39.489920  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:39.529776  947325 cri.go:89] found id: ""
	I1213 10:39:39.529790  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.529797  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:39.529814  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:39.529890  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:39.558531  947325 cri.go:89] found id: ""
	I1213 10:39:39.558545  947325 logs.go:282] 0 containers: []
	W1213 10:39:39.558552  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:39.558560  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:39.558571  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:39.625366  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:39.625384  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:39.640509  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:39.640525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:39.706928  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:39.697596   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.698528   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700141   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700637   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.702492   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:39.697596   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.698528   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700141   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.700637   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:39.702492   16228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:39.706940  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:39.706952  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:39.779211  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:39.779231  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.308782  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:42.319639  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:42.319702  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:42.346935  947325 cri.go:89] found id: ""
	I1213 10:39:42.346959  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.346970  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:42.346977  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:42.347038  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:42.378296  947325 cri.go:89] found id: ""
	I1213 10:39:42.378310  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.378316  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:42.378321  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:42.378381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:42.403824  947325 cri.go:89] found id: ""
	I1213 10:39:42.403839  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.403845  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:42.403850  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:42.403919  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:42.429874  947325 cri.go:89] found id: ""
	I1213 10:39:42.429890  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.429898  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:42.429905  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:42.429978  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:42.457189  947325 cri.go:89] found id: ""
	I1213 10:39:42.457203  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.457211  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:42.457216  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:42.457277  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:42.485373  947325 cri.go:89] found id: ""
	I1213 10:39:42.485389  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.485400  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:42.485429  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:42.485500  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:42.518706  947325 cri.go:89] found id: ""
	I1213 10:39:42.518720  947325 logs.go:282] 0 containers: []
	W1213 10:39:42.518728  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:42.518735  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:42.518746  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:42.534645  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:42.534662  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:42.606481  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:42.598265   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.599171   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.600937   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.601258   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.602752   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:42.598265   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.599171   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.600937   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.601258   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:42.602752   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:42.606491  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:42.606501  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:42.673511  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:42.673532  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:42.702426  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:42.702443  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.267475  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:45.280530  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:45.280751  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:45.313332  947325 cri.go:89] found id: ""
	I1213 10:39:45.313346  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.313354  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:45.313359  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:45.313427  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:45.342213  947325 cri.go:89] found id: ""
	I1213 10:39:45.342227  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.342234  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:45.342239  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:45.342297  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:45.371108  947325 cri.go:89] found id: ""
	I1213 10:39:45.371123  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.371130  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:45.371137  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:45.371197  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:45.400706  947325 cri.go:89] found id: ""
	I1213 10:39:45.400720  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.400728  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:45.400735  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:45.400805  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:45.428233  947325 cri.go:89] found id: ""
	I1213 10:39:45.428258  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.428266  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:45.428271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:45.428341  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:45.458995  947325 cri.go:89] found id: ""
	I1213 10:39:45.459010  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.459017  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:45.459023  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:45.459081  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:45.494206  947325 cri.go:89] found id: ""
	I1213 10:39:45.494220  947325 logs.go:282] 0 containers: []
	W1213 10:39:45.494227  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:45.494235  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:45.494246  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:45.575280  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:45.575299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:45.605803  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:45.605820  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:45.676085  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:45.676104  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:45.691072  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:45.691091  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:45.756808  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:45.747515   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.748188   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.750879   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.751438   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.752940   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:45.747515   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.748188   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.750879   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.751438   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:45.752940   16453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:48.257078  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:48.266893  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:48.266954  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:48.292251  947325 cri.go:89] found id: ""
	I1213 10:39:48.292265  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.292272  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:48.292288  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:48.292345  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:48.318109  947325 cri.go:89] found id: ""
	I1213 10:39:48.318134  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.318142  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:48.318147  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:48.318207  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:48.344874  947325 cri.go:89] found id: ""
	I1213 10:39:48.344888  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.344896  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:48.344901  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:48.344966  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:48.372878  947325 cri.go:89] found id: ""
	I1213 10:39:48.372893  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.372900  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:48.372906  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:48.372967  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:48.399505  947325 cri.go:89] found id: ""
	I1213 10:39:48.399517  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.399525  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:48.399530  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:48.399591  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:48.426096  947325 cri.go:89] found id: ""
	I1213 10:39:48.426110  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.426117  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:48.426123  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:48.426182  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:48.452372  947325 cri.go:89] found id: ""
	I1213 10:39:48.452387  947325 logs.go:282] 0 containers: []
	W1213 10:39:48.452394  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:48.452402  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:48.452413  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:48.535530  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:48.535558  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:48.565498  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:48.565516  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:48.638609  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:48.638630  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:48.653725  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:48.653743  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:48.724088  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:48.715285   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.715997   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.717752   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.718374   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.719911   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:48.715285   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.715997   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.717752   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.718374   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:48.719911   16557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:51.224632  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:51.234995  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:51.235060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:51.260920  947325 cri.go:89] found id: ""
	I1213 10:39:51.260934  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.260941  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:51.260946  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:51.261010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:51.288308  947325 cri.go:89] found id: ""
	I1213 10:39:51.288323  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.288330  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:51.288335  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:51.288395  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:51.313237  947325 cri.go:89] found id: ""
	I1213 10:39:51.313251  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.313258  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:51.313263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:51.313322  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:51.340832  947325 cri.go:89] found id: ""
	I1213 10:39:51.340845  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.340852  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:51.340857  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:51.340913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:51.367975  947325 cri.go:89] found id: ""
	I1213 10:39:51.367989  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.367996  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:51.368000  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:51.368059  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:51.393715  947325 cri.go:89] found id: ""
	I1213 10:39:51.393728  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.393736  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:51.393741  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:51.393803  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:51.422317  947325 cri.go:89] found id: ""
	I1213 10:39:51.422331  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.422338  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:51.422345  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:51.422356  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:51.492559  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:51.492577  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:51.531769  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:51.531786  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:51.599294  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:51.599316  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:51.615318  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:51.615334  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:51.678990  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:51.669927   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.670629   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672315   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672978   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.674480   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:51.669927   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.670629   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672315   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672978   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.674480   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
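Each cycle above ends the same way: pgrep finds no kube-apiserver process, crictl lists no control-plane containers, and the bundled kubectl gets connection refused on localhost:8441. A minimal shell sketch for confirming that state by hand from inside the node (opened with "minikube ssh"; the profile name is whatever this run created and is elided here):

    # no kube-apiserver container means nothing can be listening on 8441
    sudo crictl ps -a --name=kube-apiserver          # empty output, matching the log above
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -ksS https://localhost:8441/livez           # refused, exactly as kubectl reports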
	I1213 10:39:54.180647  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:54.190751  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:54.190817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:54.216105  947325 cri.go:89] found id: ""
	I1213 10:39:54.216119  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.216126  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:54.216131  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:54.216188  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:54.245934  947325 cri.go:89] found id: ""
	I1213 10:39:54.245948  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.245955  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:54.245960  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:54.246019  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:54.272786  947325 cri.go:89] found id: ""
	I1213 10:39:54.272800  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.272807  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:54.272812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:54.272871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:54.298724  947325 cri.go:89] found id: ""
	I1213 10:39:54.298738  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.298745  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:54.298750  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:54.298814  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:54.324500  947325 cri.go:89] found id: ""
	I1213 10:39:54.324514  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.324522  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:54.324533  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:54.324647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:54.351350  947325 cri.go:89] found id: ""
	I1213 10:39:54.351364  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.351372  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:54.351377  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:54.351439  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:54.376698  947325 cri.go:89] found id: ""
	I1213 10:39:54.376712  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.376720  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:54.376729  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:54.376740  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:54.408737  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:54.408753  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:54.475785  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:54.475805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:54.498578  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:54.498595  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:54.571508  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:54.562841   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.563554   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565208   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565888   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.567536   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:54.562841   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.563554   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565208   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565888   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.567536   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:54.571518  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:54.571529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:57.141570  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:57.151660  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:57.151725  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:57.177208  947325 cri.go:89] found id: ""
	I1213 10:39:57.177222  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.177230  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:57.177235  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:57.177305  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:57.202689  947325 cri.go:89] found id: ""
	I1213 10:39:57.202703  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.202710  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:57.202715  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:57.202778  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:57.227567  947325 cri.go:89] found id: ""
	I1213 10:39:57.227581  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.227588  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:57.227593  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:57.227651  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:57.257034  947325 cri.go:89] found id: ""
	I1213 10:39:57.257048  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.257056  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:57.257061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:57.257118  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:57.282238  947325 cri.go:89] found id: ""
	I1213 10:39:57.282251  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.282258  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:57.282263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:57.282321  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:57.308327  947325 cri.go:89] found id: ""
	I1213 10:39:57.308341  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.308348  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:57.308353  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:57.308412  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:57.334174  947325 cri.go:89] found id: ""
	I1213 10:39:57.334188  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.334196  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:57.334203  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:57.334214  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:57.365982  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:57.365997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:57.438986  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:57.439007  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:57.454096  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:57.454113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:57.539317  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:57.526904   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.527801   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.529755   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.530529   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.532282   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:57.526904   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.527801   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.529755   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.530529   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.532282   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:57.539330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:57.539341  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:00.111211  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:00.161991  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:00.162066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:00.278257  947325 cri.go:89] found id: ""
	I1213 10:40:00.278273  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.278282  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:00.278288  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:00.278371  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:00.356421  947325 cri.go:89] found id: ""
	I1213 10:40:00.356441  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.356449  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:00.356459  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:00.356542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:00.426855  947325 cri.go:89] found id: ""
	I1213 10:40:00.426872  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.426880  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:00.426887  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:00.426962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:00.484844  947325 cri.go:89] found id: ""
	I1213 10:40:00.484860  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.484868  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:00.484874  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:00.484945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:00.602425  947325 cri.go:89] found id: ""
	I1213 10:40:00.602444  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.602452  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:00.602465  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:00.602545  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:00.682272  947325 cri.go:89] found id: ""
	I1213 10:40:00.682288  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.682297  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:00.682303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:00.682377  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:00.717455  947325 cri.go:89] found id: ""
	I1213 10:40:00.717470  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.717478  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:00.717486  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:00.717498  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:00.751785  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:00.751805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:00.823234  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:00.823256  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:00.840067  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:00.840092  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:00.911938  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:00.902907   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.903639   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905343   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905895   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.907562   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:00.902907   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.903639   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905343   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905895   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.907562   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:00.911995  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:00.912005  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.480277  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:03.490777  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:03.490839  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:03.516535  947325 cri.go:89] found id: ""
	I1213 10:40:03.516549  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.516556  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:03.516561  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:03.516630  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:03.543061  947325 cri.go:89] found id: ""
	I1213 10:40:03.543075  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.543083  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:03.543088  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:03.543149  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:03.569136  947325 cri.go:89] found id: ""
	I1213 10:40:03.569150  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.569158  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:03.569163  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:03.569222  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:03.596417  947325 cri.go:89] found id: ""
	I1213 10:40:03.596431  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.596438  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:03.596443  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:03.596510  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:03.624475  947325 cri.go:89] found id: ""
	I1213 10:40:03.624489  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.624496  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:03.624501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:03.624560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:03.650480  947325 cri.go:89] found id: ""
	I1213 10:40:03.650495  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.650509  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:03.650515  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:03.650574  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:03.679244  947325 cri.go:89] found id: ""
	I1213 10:40:03.679258  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.679265  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:03.679272  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:03.679283  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:03.752004  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:03.742428   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.743353   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.744776   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.745390   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.747857   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:03.742428   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.743353   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.744776   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.745390   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.747857   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:03.752014  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:03.752025  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.833866  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:03.833888  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:03.863364  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:03.863381  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:03.930202  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:03.930230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.446850  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:06.456936  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:06.457005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:06.481624  947325 cri.go:89] found id: ""
	I1213 10:40:06.481638  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.481645  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:06.481653  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:06.481709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:06.510312  947325 cri.go:89] found id: ""
	I1213 10:40:06.510335  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.510342  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:06.510347  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:06.510408  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:06.541422  947325 cri.go:89] found id: ""
	I1213 10:40:06.541439  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.541446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:06.541451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:06.541511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:06.567745  947325 cri.go:89] found id: ""
	I1213 10:40:06.567759  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.567766  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:06.567771  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:06.567827  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:06.593070  947325 cri.go:89] found id: ""
	I1213 10:40:06.593085  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.593092  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:06.593097  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:06.593159  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:06.620092  947325 cri.go:89] found id: ""
	I1213 10:40:06.620106  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.620114  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:06.620119  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:06.620180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:06.646655  947325 cri.go:89] found id: ""
	I1213 10:40:06.646668  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.646676  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:06.646684  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:06.646695  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:06.713111  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:06.713133  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.729687  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:06.729703  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:06.811226  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:06.802029   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.803349   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.804038   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.805655   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.806271   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:06.802029   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.803349   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.804038   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.805655   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.806271   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:06.811237  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:06.811252  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:06.879267  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:06.879290  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.408425  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:09.418903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:09.418973  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:09.445864  947325 cri.go:89] found id: ""
	I1213 10:40:09.445878  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.445886  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:09.445891  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:09.445953  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:09.477028  947325 cri.go:89] found id: ""
	I1213 10:40:09.477042  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.477049  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:09.477054  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:09.477114  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:09.503739  947325 cri.go:89] found id: ""
	I1213 10:40:09.503754  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.503761  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:09.503766  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:09.503830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:09.530433  947325 cri.go:89] found id: ""
	I1213 10:40:09.530449  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.530458  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:09.530463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:09.530527  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:09.557391  947325 cri.go:89] found id: ""
	I1213 10:40:09.557406  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.557413  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:09.557424  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:09.557488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:09.583991  947325 cri.go:89] found id: ""
	I1213 10:40:09.584006  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.584014  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:09.584020  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:09.584084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:09.610671  947325 cri.go:89] found id: ""
	I1213 10:40:09.610685  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.610692  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:09.610701  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.610712  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:09.626022  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.626039  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.693054  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.684419   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.685112   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.686796   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.687319   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.689067   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:09.684419   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.685112   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.686796   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.687319   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.689067   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:09.693064  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:09.693077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:09.767666  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.767694  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.799935  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:09.799953  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.366822  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:12.377676  947325 kubeadm.go:602] duration metric: took 4m2.920144703s to restartPrimaryControlPlane
	W1213 10:40:12.377740  947325 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:40:12.377825  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:40:12.791103  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:40:12.803671  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:40:12.811334  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:40:12.811389  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:40:12.818912  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:40:12.818922  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:40:12.818976  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:40:12.826986  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:40:12.827043  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:40:12.834424  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:40:12.842053  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:40:12.842110  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:40:12.849745  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.857650  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:40:12.857707  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.865223  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:40:12.873255  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:40:12.873315  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
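The four grep-then-rm steps above are one check applied to each kubeconfig in turn; a sketch of the equivalent loop (in this run every grep exits with status 2 because the files are already gone, so each rm -f is a no-op):

    # mirrors the stale-config cleanup logged above (sketch)
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -qs "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"   # grep exits 2 when the file is missing
    done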
	I1213 10:40:12.881016  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:40:12.922045  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:40:12.922134  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:40:13.007876  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:40:13.007942  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:40:13.007977  947325 kubeadm.go:319] OS: Linux
	I1213 10:40:13.008021  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:40:13.008068  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:40:13.008115  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:40:13.008162  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:40:13.008210  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:40:13.008257  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:40:13.008305  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:40:13.008352  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:40:13.008397  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:40:13.081346  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:40:13.081472  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:40:13.081605  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:40:13.089963  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:40:13.093587  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:40:13.093699  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:40:13.093775  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:40:13.093883  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:40:13.093964  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:40:13.094047  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:40:13.094113  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:40:13.094188  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:40:13.094255  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:40:13.094334  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:40:13.094412  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:40:13.094451  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:40:13.094511  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:40:13.317953  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:40:13.628016  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:40:13.956341  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:40:14.391056  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:40:14.663244  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:40:14.663900  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:40:14.666642  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:40:14.670022  947325 out.go:252]   - Booting up control plane ...
	I1213 10:40:14.670125  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:40:14.670202  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:40:14.670267  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:40:14.685196  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:40:14.685574  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:40:14.692785  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:40:14.693070  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:40:14.693112  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:40:14.837275  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:40:14.837410  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:44:14.836045  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00023703s
	I1213 10:44:14.836071  947325 kubeadm.go:319] 
	I1213 10:44:14.836328  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:44:14.836386  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:44:14.836565  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:44:14.836573  947325 kubeadm.go:319] 
	I1213 10:44:14.836751  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:44:14.837048  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:44:14.837101  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:44:14.837105  947325 kubeadm.go:319] 
	I1213 10:44:14.841975  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:44:14.842445  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:44:14.842565  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:44:14.842818  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:44:14.842823  947325 kubeadm.go:319] 
	I1213 10:44:14.842900  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:44:14.842999  947325 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00023703s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:44:14.843084  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:44:15.255135  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:44:15.268065  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:44:15.268119  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:44:15.276039  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:44:15.276049  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:44:15.276099  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:44:15.283960  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:44:15.284017  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:44:15.291479  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:44:15.299068  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:44:15.299125  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:44:15.306780  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.314429  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:44:15.314486  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.321813  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:44:15.329258  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:44:15.329313  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:44:15.337109  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:44:15.375292  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:44:15.375341  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:44:15.450506  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:44:15.450577  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:44:15.450617  947325 kubeadm.go:319] OS: Linux
	I1213 10:44:15.450661  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:44:15.450708  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:44:15.450754  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:44:15.450800  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:44:15.450849  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:44:15.450900  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:44:15.450944  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:44:15.450990  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:44:15.451035  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:44:15.530795  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:44:15.530912  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:44:15.531008  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:44:15.540322  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:44:15.543642  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:44:15.543721  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:44:15.543784  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:44:15.543859  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:44:15.543918  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:44:15.543987  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:44:15.544039  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:44:15.544101  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:44:15.544161  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:44:15.544244  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:44:15.544319  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:44:15.544391  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:44:15.544447  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:44:15.880761  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:44:16.054505  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:44:16.157902  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:44:16.328847  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:44:16.490203  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:44:16.491055  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:44:16.493708  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:44:16.496861  947325 out.go:252]   - Booting up control plane ...
	I1213 10:44:16.496957  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:44:16.497033  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:44:16.497100  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:44:16.511097  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:44:16.511202  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:44:16.518811  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:44:16.519350  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:44:16.519584  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:44:16.652368  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:44:16.652480  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:48:16.653403  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001096364s
	I1213 10:48:16.653421  947325 kubeadm.go:319] 
	I1213 10:48:16.653477  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:48:16.653510  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:48:16.653633  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:48:16.653637  947325 kubeadm.go:319] 
	I1213 10:48:16.653740  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:48:16.653771  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:48:16.653801  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:48:16.653804  947325 kubeadm.go:319] 
	I1213 10:48:16.659039  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:48:16.659521  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:48:16.659636  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:48:16.659899  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:48:16.659915  947325 kubeadm.go:319] 
	I1213 10:48:16.659983  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:48:16.660039  947325 kubeadm.go:403] duration metric: took 12m7.242563635s to StartCluster
	I1213 10:48:16.660068  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:48:16.660127  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:48:16.684783  947325 cri.go:89] found id: ""
	I1213 10:48:16.684798  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.684805  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:48:16.684810  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:48:16.684871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:48:16.709976  947325 cri.go:89] found id: ""
	I1213 10:48:16.709990  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.709997  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:48:16.710001  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:48:16.710060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:48:16.735338  947325 cri.go:89] found id: ""
	I1213 10:48:16.735351  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.735358  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:48:16.735363  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:48:16.735422  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:48:16.760771  947325 cri.go:89] found id: ""
	I1213 10:48:16.760784  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.760791  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:48:16.760797  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:48:16.760851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:48:16.785193  947325 cri.go:89] found id: ""
	I1213 10:48:16.785207  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.785215  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:48:16.785220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:48:16.785280  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:48:16.811008  947325 cri.go:89] found id: ""
	I1213 10:48:16.811022  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.811029  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:48:16.811034  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:48:16.811093  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:48:16.840077  947325 cri.go:89] found id: ""
	I1213 10:48:16.840092  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.840099  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:48:16.840119  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:48:16.840130  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:48:16.909363  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:48:16.909386  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:48:16.924416  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:48:16.924438  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:48:17.001976  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:48:17.001987  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:48:17.001997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:48:17.083059  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:48:17.083078  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:48:17.113855  947325 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:48:17.113886  947325 out.go:285] * 
	W1213 10:48:17.113944  947325 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.113961  947325 out.go:285] * 
	W1213 10:48:17.116079  947325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:48:17.121140  947325 out.go:203] 
	W1213 10:48:17.123914  947325 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.123972  947325 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:48:17.123993  947325 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:48:17.128861  947325 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290540792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290575401Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290616288Z" level=info msg="Create NRI interface"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291085281Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291114401Z" level=info msg="runtime interface created"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291129622Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291142299Z" level=info msg="runtime interface starting up..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291148937Z" level=info msg="starting plugins..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291165938Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291236782Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:36:08 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.084834397Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b5efff79-46eb-41f2-bde4-db3ba9dab38c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08566844Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5615dd29-1801-45cf-b9ec-bc2670925ce8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086277701Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=27142b95-3cc3-4adb-a2df-9868044a9998 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086727642Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=22595c5f-3db5-4062-8e04-cb17f6bc794b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.087217057Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4772ea3e-d27c-4029-bb8e-c23e148a40e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08768738Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9219a2d6-ec51-448e-87c0-444e5d98b53a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.088157391Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a67c2fca-67f5-45c5-89da-71309b05610c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.534115746Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3049b4b3-14f8-431e-ab4d-c6efa4a37dac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.535316398Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=aca0cbac-b5e4-4959-a768-b532f9c78063 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.53607634Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f458fd4-6b17-4cc2-8b0b-32f7a700d5d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.536719579Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=93897364-2c92-4299-ac1e-dfb20638840a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538084483Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=80c3abea-faad-48a9-8be1-ff63680847aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538942002Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9f779e3b-3069-476b-9013-f486002774b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.539437793Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=de504892-ee6f-46b3-8ac6-2712427d6188 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:21.707810   23245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:21.708371   23245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:21.709909   23245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:21.710344   23245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:21.711781   23245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:50:21 up  5:32,  0 user,  load average: 0.32, 0.26, 0.50
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:50:19 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:19 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1124.
	Dec 13 10:50:19 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:19 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:20 functional-200955 kubelet[23109]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:20 functional-200955 kubelet[23109]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:20 functional-200955 kubelet[23109]: E1213 10:50:20.044992   23109 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:20 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:20 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:20 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1125.
	Dec 13 10:50:20 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:20 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:20 functional-200955 kubelet[23152]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:20 functional-200955 kubelet[23152]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:20 functional-200955 kubelet[23152]: E1213 10:50:20.799604   23152 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:20 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:20 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:21 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1126.
	Dec 13 10:50:21 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:21 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:21 functional-200955 kubelet[23202]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:21 functional-200955 kubelet[23202]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:21 functional-200955 kubelet[23202]: E1213 10:50:21.525865   23202 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:21 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:21 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (342.936362ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.05s)
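
Editor's note: every failure in this test group traces back to the same root cause visible in the kubelet log above: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1", crash-looping at restart counters 1124 through 1126). The log itself names two remedies: setting the KubeletConfiguration option FailCgroupV1 to false, or the minikube suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal diagnostic and retry sketch follows, assuming the docker driver and the profile name from this run; verify the flags against this beta before relying on them:

    # Confirm which cgroup version the host mounts
    # ("cgroup2fs" means v2, "tmpfs" means v1):
    stat -fc %T /sys/fs/cgroup/

    # Inspect the kubelet's own failure, as the log advises:
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50

    # Retry with the cgroup driver named in the suggestion above
    # (hypothetical invocation assembled from flags shown in this report):
    minikube delete -p functional-200955
    minikube start -p functional-200955 --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
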
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.34s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-200955 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-200955 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.695382ms)
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-200955 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-200955 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-200955 describe po hello-node-connect: exit status 1 (62.883692ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:1614: "kubectl --context functional-200955 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-200955 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-200955 logs -l app=hello-node-connect: exit status 1 (57.538784ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:1620: "kubectl --context functional-200955 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-200955 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-200955 describe svc hello-node-connect: exit status 1 (60.838937ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:1626: "kubectl --context functional-200955 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
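
Note: all four kubectl invocations above fail identically because nothing is listening on 192.168.49.2:8441; the apiserver never started (see the kubelet crash loop in the StatusCmd failure above). A quick hedged probe to confirm that before reading the post-mortem dump below; whether curl is present inside the node image is an assumption:

    # Cluster-level view from minikube itself:
    minikube -p functional-200955 status

    # Probe the apiserver port from inside the node container
    # (assumes curl exists in the kicbase image):
    docker exec functional-200955 curl -sk https://localhost:8441/healthz

    # Check whether kubelet is active inside the node:
    docker exec functional-200955 systemctl is-active kubelet
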
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:
-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
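The one field in the inspect dump that matters for the refused connection above is the 8441/tcp port mapping. The same Go-template pattern the harness itself uses for 22/tcp (see the "Last Start" log below) can pull it out directly, e.g.:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-200955
	# per the JSON above, this prints 33526 (bound on 127.0.0.1)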
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (347.01764ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
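Exit status 2 with Host=Running is the shape expected when the container is up but the control plane is not answering, which matches the connection-refused errors above; the --format={{.Host}} template only shows the host field. A per-component view is available via the status command's --output flag (output shape illustrative):

	out/minikube-linux-arm64 status -p functional-200955 --output json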
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-200955 cache reload                                                                                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ ssh     │ functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │ 13 Dec 25 10:35 UTC │
	│ kubectl │ functional-200955 kubectl -- --context functional-200955 get pods                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:35 UTC │                     │
	│ start   │ -p functional-200955 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                   │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:36 UTC │                     │
	│ config  │ functional-200955 config unset cpus                                                                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ cp      │ functional-200955 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                         │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ config  │ functional-200955 config get cpus                                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │                     │
	│ config  │ functional-200955 config set cpus 2                                                                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ config  │ functional-200955 config get cpus                                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ config  │ functional-200955 config unset cpus                                                                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ ssh     │ functional-200955 ssh -n functional-200955 sudo cat /home/docker/cp-test.txt                                                                               │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ config  │ functional-200955 config get cpus                                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │                     │
	│ ssh     │ functional-200955 ssh echo hello                                                                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ cp      │ functional-200955 cp functional-200955:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp11267318/001/cp-test.txt │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ ssh     │ functional-200955 ssh cat /etc/hostname                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ ssh     │ functional-200955 ssh -n functional-200955 sudo cat /home/docker/cp-test.txt                                                                               │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ tunnel  │ functional-200955 tunnel --alsologtostderr                                                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │                     │
	│ tunnel  │ functional-200955 tunnel --alsologtostderr                                                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │                     │
	│ cp      │ functional-200955 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                  │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ tunnel  │ functional-200955 tunnel --alsologtostderr                                                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │                     │
	│ ssh     │ functional-200955 ssh -n functional-200955 sudo cat /tmp/does/not/exist/cp-test.txt                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:48 UTC │ 13 Dec 25 10:48 UTC │
	│ addons  │ functional-200955 addons list                                                                                                                              │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ addons  │ functional-200955 addons list -o json                                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
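	Note that the 10:36 start with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision never records an END TIME, consistent with the hung ExtraConfig restart; the tunnel rows are long-running by design. This table is rendered from minikube's audit log, which can also be queried directly (sketch, assuming the line-delimited JSON event format minikube writes under MINIKUBE_HOME):

	jq -r '.data | [.command, .startTime, .endTime] | @tsv' /home/jenkins/minikube-integration/22128-904040/.minikube/logs/audit.json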
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:36:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:36:05.024663  947325 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:36:05.024857  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.024862  947325 out.go:374] Setting ErrFile to fd 2...
	I1213 10:36:05.024867  947325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:36:05.025148  947325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:36:05.025578  947325 out.go:368] Setting JSON to false
	I1213 10:36:05.026512  947325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19114,"bootTime":1765603051,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:36:05.026573  947325 start.go:143] virtualization:  
	I1213 10:36:05.030119  947325 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:36:05.033180  947325 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:36:05.033273  947325 notify.go:221] Checking for updates...
	I1213 10:36:05.036966  947325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:36:05.041647  947325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:36:05.044535  947325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:36:05.047483  947325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:36:05.050413  947325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:36:05.053885  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:05.053982  947325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:36:05.081037  947325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:36:05.081166  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.151201  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.14075062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.151307  947325 docker.go:319] overlay module found
	I1213 10:36:05.154359  947325 out.go:179] * Using the docker driver based on existing profile
	I1213 10:36:05.157187  947325 start.go:309] selected driver: docker
	I1213 10:36:05.157194  947325 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.157283  947325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:36:05.157388  947325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:36:05.214971  947325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:36:05.204866403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:36:05.215380  947325 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:36:05.215410  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:05.215457  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:05.215500  947325 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:05.218694  947325 out.go:179] * Starting "functional-200955" primary control-plane node in "functional-200955" cluster
	I1213 10:36:05.221699  947325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:36:05.224563  947325 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:36:05.227409  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:05.227448  947325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:36:05.227455  947325 cache.go:65] Caching tarball of preloaded images
	I1213 10:36:05.227491  947325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:36:05.227538  947325 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:36:05.227551  947325 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:36:05.227666  947325 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/config.json ...
	I1213 10:36:05.247494  947325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:36:05.247505  947325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:36:05.247518  947325 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:36:05.247549  947325 start.go:360] acquireMachinesLock for functional-200955: {Name:mkc5e96275d9db4dc69c44a1e3c60b6575a1e73a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:36:05.247604  947325 start.go:364] duration metric: took 37.317µs to acquireMachinesLock for "functional-200955"
	I1213 10:36:05.247623  947325 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:36:05.247627  947325 fix.go:54] fixHost starting: 
	I1213 10:36:05.247894  947325 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
	I1213 10:36:05.265053  947325 fix.go:112] recreateIfNeeded on functional-200955: state=Running err=<nil>
	W1213 10:36:05.265102  947325 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:36:05.268458  947325 out.go:252] * Updating the running docker "functional-200955" container ...
	I1213 10:36:05.268485  947325 machine.go:94] provisionDockerMachine start ...
	I1213 10:36:05.268569  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.285699  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.286021  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.286027  947325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:36:05.433614  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.433628  947325 ubuntu.go:182] provisioning hostname "functional-200955"
	I1213 10:36:05.433698  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.452166  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.452470  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.452478  947325 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-200955 && echo "functional-200955" | sudo tee /etc/hostname
	I1213 10:36:05.611951  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-200955
	
	I1213 10:36:05.612044  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:05.630892  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:05.631191  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:05.631205  947325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200955/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:36:05.782771  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:36:05.782787  947325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 10:36:05.782810  947325 ubuntu.go:190] setting up certificates
	I1213 10:36:05.782824  947325 provision.go:84] configureAuth start
	I1213 10:36:05.782884  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:05.800513  947325 provision.go:143] copyHostCerts
	I1213 10:36:05.800580  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 10:36:05.800588  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 10:36:05.800662  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 10:36:05.800773  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 10:36:05.800777  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 10:36:05.800802  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 10:36:05.800861  947325 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 10:36:05.800865  947325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 10:36:05.800887  947325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 10:36:05.800938  947325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.functional-200955 san=[127.0.0.1 192.168.49.2 functional-200955 localhost minikube]
	I1213 10:36:06.162765  947325 provision.go:177] copyRemoteCerts
	I1213 10:36:06.162821  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:36:06.162864  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.179964  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.285273  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:36:06.303138  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:36:06.321000  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:36:06.339159  947325 provision.go:87] duration metric: took 556.311814ms to configureAuth
	I1213 10:36:06.339177  947325 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:36:06.339382  947325 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:36:06.339492  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.357323  947325 main.go:143] libmachine: Using SSH client type: native
	I1213 10:36:06.357649  947325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I1213 10:36:06.357662  947325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:36:06.705283  947325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:36:06.705297  947325 machine.go:97] duration metric: took 1.436804594s to provisionDockerMachine
	I1213 10:36:06.705307  947325 start.go:293] postStartSetup for "functional-200955" (driver="docker")
	I1213 10:36:06.705318  947325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:36:06.705379  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:36:06.705435  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.722886  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.829449  947325 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:36:06.832816  947325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:36:06.832847  947325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:36:06.832858  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 10:36:06.832914  947325 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 10:36:06.832996  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 10:36:06.833088  947325 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts -> hosts in /etc/test/nested/copy/907484
	I1213 10:36:06.833134  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/907484
	I1213 10:36:06.840686  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:06.859025  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts --> /etc/test/nested/copy/907484/hosts (40 bytes)
	I1213 10:36:06.877717  947325 start.go:296] duration metric: took 172.395592ms for postStartSetup
	I1213 10:36:06.877814  947325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:36:06.877857  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:06.896880  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:06.998897  947325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:36:07.005668  947325 fix.go:56] duration metric: took 1.758032508s for fixHost
	I1213 10:36:07.005685  947325 start.go:83] releasing machines lock for "functional-200955", held for 1.758074248s
	I1213 10:36:07.005790  947325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-200955
	I1213 10:36:07.024345  947325 ssh_runner.go:195] Run: cat /version.json
	I1213 10:36:07.024397  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.024410  947325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:36:07.024473  947325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
	I1213 10:36:07.045627  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.056017  947325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
	I1213 10:36:07.235962  947325 ssh_runner.go:195] Run: systemctl --version
	I1213 10:36:07.243338  947325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:36:07.293399  947325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:36:07.297828  947325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:36:07.297890  947325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:36:07.305998  947325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:36:07.306012  947325 start.go:496] detecting cgroup driver to use...
	I1213 10:36:07.306043  947325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:36:07.306089  947325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:36:07.321360  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:36:07.334818  947325 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:36:07.334873  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:36:07.350268  947325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:36:07.363266  947325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:36:07.482802  947325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:36:07.601250  947325 docker.go:234] disabling docker service ...
	I1213 10:36:07.601314  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:36:07.616649  947325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:36:07.630193  947325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:36:07.750803  947325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:36:07.872755  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:36:07.885775  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:36:07.901044  947325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:36:07.901118  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.910913  947325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:36:07.910999  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.920242  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.929183  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.938207  947325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:36:07.946601  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.956231  947325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.964904  947325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:36:07.974470  947325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:36:07.983694  947325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:36:07.992492  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.121808  947325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:36:08.297420  947325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:36:08.297489  947325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:36:08.301247  947325 start.go:564] Will wait 60s for crictl version
	I1213 10:36:08.301305  947325 ssh_runner.go:195] Run: which crictl
	I1213 10:36:08.304718  947325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:36:08.329152  947325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:36:08.329228  947325 ssh_runner.go:195] Run: crio --version
	I1213 10:36:08.358630  947325 ssh_runner.go:195] Run: crio --version
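The runtime probes above all go through the CRI endpoint written to /etc/crictl.yaml earlier in this start. When reproducing outside the harness, the equivalent manual checks inside the node container would be (sketch):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo systemctl is-active crio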
	I1213 10:36:08.393160  947325 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:36:08.396025  947325 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:36:08.412435  947325 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:36:08.419349  947325 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:36:08.422234  947325 kubeadm.go:884] updating cluster {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:36:08.422367  947325 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:36:08.422431  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.457237  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.457249  947325 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:36:08.457306  947325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:36:08.483246  947325 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:36:08.483258  947325 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:36:08.483264  947325 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:36:08.483360  947325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-200955 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
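The unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). To confirm which ExecStart systemd actually merged, one could run inside the node container (sketch):

	sudo systemctl cat kubelet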
	I1213 10:36:08.483446  947325 ssh_runner.go:195] Run: crio config
	I1213 10:36:08.545147  947325 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:36:08.545173  947325 cni.go:84] Creating CNI manager for ""
	I1213 10:36:08.545183  947325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:36:08.545197  947325 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:36:08.545221  947325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200955 NodeName:functional-200955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:36:08.545347  947325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-200955"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
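For context on the block above: minikube generates this manifest from a Go template filled in with the kubeadm options logged at kubeadm.go:190. A minimal sketch of that templating step follows; the trimmed template and the kubeadmParams struct are illustrative stand-ins, not minikube's actual code.

package main

import (
	"os"
	"text/template"
)

// Trimmed stand-in for the kubeadm options logged above (kubeadm.go:190).
// Struct and field names here are illustrative, not minikube's real types.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const manifest = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(manifest))
	// Values mirror the config dump above.
	params := kubeadmParams{"192.168.49.2", 8441, "functional-200955", "10.244.0.0/16"}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}

The real template additionally renders the extraArgs overrides (such as the enable-admission-plugins value injected above); the sketch only shows the mechanism.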
	I1213 10:36:08.545423  947325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:36:08.553515  947325 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:36:08.553607  947325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:36:08.561293  947325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:36:08.574385  947325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:36:08.587429  947325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 10:36:08.600337  947325 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:36:08.603994  947325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:36:08.714374  947325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:36:08.729978  947325 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955 for IP: 192.168.49.2
	I1213 10:36:08.729989  947325 certs.go:195] generating shared ca certs ...
	I1213 10:36:08.730004  947325 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:36:08.730137  947325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 10:36:08.730179  947325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 10:36:08.730184  947325 certs.go:257] generating profile certs ...
	I1213 10:36:08.730263  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.key
	I1213 10:36:08.730310  947325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key.8da389ed
	I1213 10:36:08.730347  947325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key
	I1213 10:36:08.730463  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 10:36:08.730496  947325 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 10:36:08.730503  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:36:08.730557  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:36:08.730581  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:36:08.730604  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 10:36:08.730645  947325 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 10:36:08.731237  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:36:08.752034  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:36:08.773437  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:36:08.794430  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:36:08.812223  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:36:08.829741  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:36:08.846903  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:36:08.865036  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:36:08.883435  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:36:08.901321  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 10:36:08.919555  947325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 10:36:08.937123  947325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:36:08.950079  947325 ssh_runner.go:195] Run: openssl version
	I1213 10:36:08.956456  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.964062  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 10:36:08.971445  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975220  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 10:36:08.975278  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 10:36:09.016546  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:36:09.024284  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.031776  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:36:09.039308  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.042991  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.043047  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:36:09.084141  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:36:09.091531  947325 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.098770  947325 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 10:36:09.106212  947325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.109989  947325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.110044  947325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 10:36:09.153254  947325 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
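The three repetitions above (9074842.pem, minikubeCA.pem, 907484.pem) follow one pattern: stage the PEM, compute its OpenSSL subject hash, and symlink <hash>.0 into /etc/ssl/certs so OpenSSL-based clients can resolve it. A rough Go sketch of a single round, shelling out to openssl the way the log does; the path is a placeholder:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Placeholder path; the log runs this round for three different PEMs.
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// "openssl x509 -hash -noout" prints the subject hash, e.g. "b5213941",
	// which is the hash probed above as /etc/ssl/certs/<hash>.0.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of "ln -fs": drop any stale link, then create the symlink.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}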
	I1213 10:36:09.160715  947325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:36:09.164506  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:36:09.205710  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:36:09.247436  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:36:09.288348  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:36:09.331611  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:36:09.374582  947325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
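Each -checkend 86400 invocation above asks whether a certificate expires within the next 24 hours (exit 0 if it stays valid). The same check can be done natively in Go; a sketch assuming a single-certificate PEM file:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, matching "openssl x509 -checkend" semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data) // assumes the first PEM block is the certificate
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the several certs the log checks under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}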
	I1213 10:36:09.417486  947325 kubeadm.go:401] StartCluster: {Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:36:09.417589  947325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:36:09.417682  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.449632  947325 cri.go:89] found id: ""
	I1213 10:36:09.449706  947325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:36:09.457511  947325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:36:09.457521  947325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:36:09.457596  947325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:36:09.465280  947325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.465840  947325 kubeconfig.go:125] found "functional-200955" server: "https://192.168.49.2:8441"
	I1213 10:36:09.467296  947325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:36:09.475528  947325 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:21:33.398300096 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:36:08.597035311 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
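The drift check above is plain diff -u semantics: exit status 0 means the deployed and freshly generated configs match, exit status 1 means drift and triggers the reconfiguration that follows. A compact sketch of that decision, using the paths from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",     // config deployed earlier
		"/var/tmp/minikube/kubeadm.yaml.new") // config generated just now
	out, err := cmd.Output()
	if err == nil {
		fmt.Println("no drift; keep the running control plane as is")
		return
	}
	// diff exits 1 when the files differ; anything else is a real failure.
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		fmt.Printf("config drift detected, reconfiguring cluster:\n%s", out)
		return
	}
	panic(err)
}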
	I1213 10:36:09.475546  947325 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:36:09.475557  947325 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:36:09.475616  947325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:36:09.507924  947325 cri.go:89] found id: ""
	I1213 10:36:09.508000  947325 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:36:09.528470  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:36:09.536474  947325 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 13 10:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 10:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:25 /etc/kubernetes/scheduler.conf
	
	I1213 10:36:09.536539  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:36:09.544588  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:36:09.552476  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.552532  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:36:09.560285  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.567834  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.567887  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:36:09.575592  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:36:09.583902  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:36:09.583961  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:36:09.591566  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:36:09.599534  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:09.647986  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.096705  947325 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44869318s)
	I1213 10:36:11.096768  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.325396  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.390971  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:36:11.438539  947325 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:36:11.438613  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:36:11.939787  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical probe "sudo pgrep -xnf kube-apiserver.*minikube.*" repeats every 500ms from 10:36:12.439662 through 10:37:10.439786 without finding an apiserver process ...]
	I1213 10:37:10.939776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
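The minute of probes above is a fixed-interval wait: run pgrep every 500ms until the apiserver process appears or a deadline expires, then fall through to log collection. A local sketch of that loop (minikube actually runs the probe over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Probe every 500ms until the process shows up or the deadline passes.
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// pgrep exits 0 as soon as a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out; fall back to gathering diagnostic logs")
}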
	I1213 10:37:11.439694  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:11.439774  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:11.465379  947325 cri.go:89] found id: ""
	I1213 10:37:11.465394  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.465401  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:11.465406  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:11.465463  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:11.498722  947325 cri.go:89] found id: ""
	I1213 10:37:11.498736  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.498744  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:11.498749  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:11.498808  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:11.528435  947325 cri.go:89] found id: ""
	I1213 10:37:11.528450  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.528456  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:11.528461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:11.528520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:11.556412  947325 cri.go:89] found id: ""
	I1213 10:37:11.556428  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.556435  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:11.556439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:11.556495  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:11.582029  947325 cri.go:89] found id: ""
	I1213 10:37:11.582043  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.582050  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:11.582055  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:11.582111  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:11.606900  947325 cri.go:89] found id: ""
	I1213 10:37:11.606914  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.606921  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:11.606926  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:11.606995  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:11.631834  947325 cri.go:89] found id: ""
	I1213 10:37:11.631848  947325 logs.go:282] 0 containers: []
	W1213 10:37:11.631855  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:11.631863  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:11.631873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:11.696990  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:11.697011  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:11.711905  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:11.711923  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:11.780498  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:11.772620   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.773404   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.774929   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.775464   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:11.776559   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:11.780514  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:11.780525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:11.849149  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:11.849169  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
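Each diagnostic cycle in this phase gathers the same sources: the kubelet and CRI-O journals, dmesg, kubectl describe nodes, and container status. A sketch of such a best-effort collection step, using the commands visible in the log:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command and prints its combined output,
// tolerating failures the way the log-collection phase above does.
func gather(name string, args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("==> %s (err: %v)\n%s\n", name, err, out)
}

func main() {
	// Same sources the log cycles through on every retry.
	gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
	gather("dmesg", "sudo", "dmesg", "--level", "warn,err,crit,alert,emerg")
	gather("CRI-O", "sudo", "journalctl", "-u", "crio", "-n", "400")
	gather("container status", "sudo", "crictl", "ps", "-a")
}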
	I1213 10:37:14.380275  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:14.390300  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:14.390376  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:14.422358  947325 cri.go:89] found id: ""
	I1213 10:37:14.422408  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.422434  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:14.422439  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:14.422577  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:14.449364  947325 cri.go:89] found id: ""
	I1213 10:37:14.449379  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.449386  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:14.449391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:14.449448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:14.478529  947325 cri.go:89] found id: ""
	I1213 10:37:14.478543  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.478550  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:14.478555  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:14.478612  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:14.521652  947325 cri.go:89] found id: ""
	I1213 10:37:14.521666  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.521673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:14.521678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:14.521736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:14.558521  947325 cri.go:89] found id: ""
	I1213 10:37:14.558535  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.558542  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:14.558547  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:14.558605  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:14.582435  947325 cri.go:89] found id: ""
	I1213 10:37:14.582448  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.582455  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:14.582461  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:14.582518  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:14.607776  947325 cri.go:89] found id: ""
	I1213 10:37:14.607791  947325 logs.go:282] 0 containers: []
	W1213 10:37:14.607799  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:14.607807  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:14.607816  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:14.673008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:14.673028  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:14.688569  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:14.688585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:14.753510  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:14.744939   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.745653   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747326   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.747936   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:14.749524   11090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:14.753524  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:14.753556  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:14.820848  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:14.820868  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:17.353563  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:17.363824  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:17.363887  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:17.393249  947325 cri.go:89] found id: ""
	I1213 10:37:17.393263  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.393271  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:17.393275  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:17.393334  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:17.421143  947325 cri.go:89] found id: ""
	I1213 10:37:17.421157  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.421164  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:17.421169  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:17.421226  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:17.445347  947325 cri.go:89] found id: ""
	I1213 10:37:17.445361  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.445368  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:17.445372  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:17.445428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:17.474380  947325 cri.go:89] found id: ""
	I1213 10:37:17.474406  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.474413  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:17.474419  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:17.474502  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:17.510146  947325 cri.go:89] found id: ""
	I1213 10:37:17.510160  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.510167  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:17.510172  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:17.510228  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:17.544872  947325 cri.go:89] found id: ""
	I1213 10:37:17.544897  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.544911  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:17.544917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:17.544987  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:17.570527  947325 cri.go:89] found id: ""
	I1213 10:37:17.570542  947325 logs.go:282] 0 containers: []
	W1213 10:37:17.570549  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:17.570556  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:17.570567  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:17.634904  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:17.634924  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:17.649198  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:17.649216  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:17.710891  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:17.702777   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.703254   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.704864   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.705195   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:17.706621   11196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:17.710910  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:17.710921  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:17.779540  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:17.779561  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.315323  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:20.326110  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:20.326185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:20.351285  947325 cri.go:89] found id: ""
	I1213 10:37:20.351299  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.351307  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:20.351312  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:20.351381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:20.377322  947325 cri.go:89] found id: ""
	I1213 10:37:20.377335  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.377343  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:20.377352  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:20.377413  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:20.403676  947325 cri.go:89] found id: ""
	I1213 10:37:20.403691  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.403698  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:20.403704  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:20.403766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:20.433713  947325 cri.go:89] found id: ""
	I1213 10:37:20.433736  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.433744  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:20.433749  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:20.433809  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:20.459243  947325 cri.go:89] found id: ""
	I1213 10:37:20.459258  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.459265  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:20.459270  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:20.459328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:20.505295  947325 cri.go:89] found id: ""
	I1213 10:37:20.505310  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.505317  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:20.505322  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:20.505382  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:20.534487  947325 cri.go:89] found id: ""
	I1213 10:37:20.534502  947325 logs.go:282] 0 containers: []
	W1213 10:37:20.534510  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:20.534518  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:20.534529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:20.562816  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:20.562833  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:20.626774  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:20.626798  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:20.642510  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:20.642526  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:20.716150  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:20.707015   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.707754   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.709473   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.710077   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:20.711566   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:20.716164  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:20.716176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.288286  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:23.298705  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:23.298766  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:23.333024  947325 cri.go:89] found id: ""
	I1213 10:37:23.333038  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.333046  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:23.333051  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:23.333115  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:23.358903  947325 cri.go:89] found id: ""
	I1213 10:37:23.358916  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.358924  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:23.358929  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:23.358989  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:23.384787  947325 cri.go:89] found id: ""
	I1213 10:37:23.384801  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.384808  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:23.384812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:23.384871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:23.410002  947325 cri.go:89] found id: ""
	I1213 10:37:23.410036  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.410061  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:23.410086  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:23.410150  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:23.434837  947325 cri.go:89] found id: ""
	I1213 10:37:23.434865  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.434872  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:23.434878  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:23.434945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:23.464375  947325 cri.go:89] found id: ""
	I1213 10:37:23.464389  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.464396  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:23.464402  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:23.464472  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:23.506074  947325 cri.go:89] found id: ""
	I1213 10:37:23.506089  947325 logs.go:282] 0 containers: []
	W1213 10:37:23.506097  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:23.506104  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:23.506116  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:23.589169  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:23.589191  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:23.619461  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:23.619477  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:23.688698  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:23.688720  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:23.703620  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:23.703637  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:23.771897  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:23.763311   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.763984   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.765659   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.766138   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:23.767919   11423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
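	(Editor's note: each poll above runs the same fixed sequence of checks. A minimal sketch of reproducing one cycle by hand on the node, using only commands taken verbatim from this log — the kubectl binary path and kubeconfig location are the ones minikube itself uses here:

	    # 1. Is a kube-apiserver process running at all?
	    sudo pgrep -xnf kube-apiserver.*minikube.*

	    # 2. Does CRI-O know about any control-plane containers? (all empty in this run)
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name=$c
	    done

	    # 3. Gather the same five diagnostics minikube collects on failure
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	)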
	I1213 10:37:26.272169  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:26.282101  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:26.282172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:26.308055  947325 cri.go:89] found id: ""
	I1213 10:37:26.308071  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.308078  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:26.308086  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:26.308147  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:26.334700  947325 cri.go:89] found id: ""
	I1213 10:37:26.334722  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.334729  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:26.334735  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:26.334799  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:26.360726  947325 cri.go:89] found id: ""
	I1213 10:37:26.360749  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.360758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:26.360763  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:26.360830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:26.385135  947325 cri.go:89] found id: ""
	I1213 10:37:26.385149  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.385157  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:26.385162  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:26.385233  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:26.412837  947325 cri.go:89] found id: ""
	I1213 10:37:26.412851  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.412858  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:26.412863  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:26.412942  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:26.437812  947325 cri.go:89] found id: ""
	I1213 10:37:26.437827  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.437834  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:26.437839  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:26.437900  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:26.463570  947325 cri.go:89] found id: ""
	I1213 10:37:26.463584  947325 logs.go:282] 0 containers: []
	W1213 10:37:26.463592  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:26.463600  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:26.463611  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:26.534802  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:26.534823  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:26.550643  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:26.550658  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:26.612829  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:26.605210   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.605795   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.606999   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.607456   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:26.609002   11516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:26.612839  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:26.612849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:26.681461  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:26.681480  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:29.210709  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:29.221193  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:29.221255  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:29.249274  947325 cri.go:89] found id: ""
	I1213 10:37:29.249289  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.249297  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:29.249301  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:29.249369  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:29.276685  947325 cri.go:89] found id: ""
	I1213 10:37:29.276709  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.276718  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:29.276723  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:29.276788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:29.303267  947325 cri.go:89] found id: ""
	I1213 10:37:29.303281  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.303289  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:29.303294  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:29.303355  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:29.328158  947325 cri.go:89] found id: ""
	I1213 10:37:29.328173  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.328180  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:29.328186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:29.328244  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:29.355541  947325 cri.go:89] found id: ""
	I1213 10:37:29.355556  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.355565  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:29.355570  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:29.355627  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:29.381411  947325 cri.go:89] found id: ""
	I1213 10:37:29.381426  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.381433  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:29.381439  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:29.381501  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:29.407073  947325 cri.go:89] found id: ""
	I1213 10:37:29.407088  947325 logs.go:282] 0 containers: []
	W1213 10:37:29.407094  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:29.407101  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:29.407113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:29.422330  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:29.422347  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:29.498825  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:29.490027   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.491071   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.492766   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.493102   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:29.494590   11614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:29.498837  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:29.498850  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:29.575835  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:29.575856  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:29.607770  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:29.607790  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.181248  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:32.191812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:32.191876  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:32.217211  947325 cri.go:89] found id: ""
	I1213 10:37:32.217225  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.217233  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:32.217238  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:32.217293  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:32.243073  947325 cri.go:89] found id: ""
	I1213 10:37:32.243087  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.243095  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:32.243100  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:32.243172  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:32.272999  947325 cri.go:89] found id: ""
	I1213 10:37:32.273013  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.273020  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:32.273025  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:32.273084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:32.299079  947325 cri.go:89] found id: ""
	I1213 10:37:32.299092  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.299099  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:32.299104  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:32.299161  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:32.328707  947325 cri.go:89] found id: ""
	I1213 10:37:32.328722  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.328729  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:32.328734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:32.328795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:32.354361  947325 cri.go:89] found id: ""
	I1213 10:37:32.354375  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.354382  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:32.354388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:32.354448  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:32.380069  947325 cri.go:89] found id: ""
	I1213 10:37:32.380083  947325 logs.go:282] 0 containers: []
	W1213 10:37:32.380089  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:32.380096  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:32.380107  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:32.445012  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:32.445036  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:32.460199  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:32.460223  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:32.549445  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:32.540188   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.540702   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.542738   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.543594   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:32.545344   11723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:32.549456  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:32.549467  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:32.617595  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:32.617617  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.148911  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:35.159421  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:35.159482  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:35.186963  947325 cri.go:89] found id: ""
	I1213 10:37:35.186976  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.186984  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:35.186989  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:35.187046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:35.216128  947325 cri.go:89] found id: ""
	I1213 10:37:35.216142  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.216153  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:35.216158  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:35.216217  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:35.244930  947325 cri.go:89] found id: ""
	I1213 10:37:35.244945  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.244953  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:35.244958  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:35.245020  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:35.270186  947325 cri.go:89] found id: ""
	I1213 10:37:35.270200  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.270207  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:35.270212  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:35.270268  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:35.296166  947325 cri.go:89] found id: ""
	I1213 10:37:35.296180  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.296187  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:35.296192  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:35.296249  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:35.325322  947325 cri.go:89] found id: ""
	I1213 10:37:35.325337  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.325344  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:35.325349  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:35.325411  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:35.350870  947325 cri.go:89] found id: ""
	I1213 10:37:35.350884  947325 logs.go:282] 0 containers: []
	W1213 10:37:35.350892  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:35.350900  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:35.350911  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:35.365840  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:35.365857  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:35.428973  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:35.420649   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.421481   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.422989   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.423556   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:35.425084   11826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:35.428993  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:35.429004  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:35.497503  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:35.497522  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:35.530732  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:35.530751  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:38.099975  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:38.110243  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:38.110306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:38.140777  947325 cri.go:89] found id: ""
	I1213 10:37:38.140792  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.140798  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:38.140804  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:38.140871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:38.167186  947325 cri.go:89] found id: ""
	I1213 10:37:38.167200  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.167207  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:38.167212  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:38.167276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:38.193305  947325 cri.go:89] found id: ""
	I1213 10:37:38.193318  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.193326  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:38.193331  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:38.193388  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:38.219451  947325 cri.go:89] found id: ""
	I1213 10:37:38.219464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.219472  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:38.219477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:38.219542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:38.249284  947325 cri.go:89] found id: ""
	I1213 10:37:38.249299  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.249306  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:38.249311  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:38.249380  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:38.275451  947325 cri.go:89] found id: ""
	I1213 10:37:38.275464  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.275471  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:38.275477  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:38.275538  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:38.300476  947325 cri.go:89] found id: ""
	I1213 10:37:38.300490  947325 logs.go:282] 0 containers: []
	W1213 10:37:38.300497  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:38.300504  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:38.300517  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:38.366681  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:38.366700  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:38.381405  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:38.381423  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:38.441215  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:38.434082   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.434551   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.435674   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.436023   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:38.437450   11932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:38.441225  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:38.441236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:38.508504  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:38.508525  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.051455  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:41.061451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:41.061522  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:41.087312  947325 cri.go:89] found id: ""
	I1213 10:37:41.087331  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.087338  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:41.087343  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:41.087416  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:41.116231  947325 cri.go:89] found id: ""
	I1213 10:37:41.116246  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.116253  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:41.116258  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:41.116316  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:41.147429  947325 cri.go:89] found id: ""
	I1213 10:37:41.147444  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.147451  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:41.147457  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:41.147516  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:41.176551  947325 cri.go:89] found id: ""
	I1213 10:37:41.176565  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.176573  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:41.176578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:41.176634  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:41.204132  947325 cri.go:89] found id: ""
	I1213 10:37:41.204146  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.204154  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:41.204159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:41.204223  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:41.230785  947325 cri.go:89] found id: ""
	I1213 10:37:41.230799  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.230807  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:41.230813  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:41.230880  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:41.256411  947325 cri.go:89] found id: ""
	I1213 10:37:41.256425  947325 logs.go:282] 0 containers: []
	W1213 10:37:41.256433  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:41.256440  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:41.256451  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:41.285617  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:41.285636  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:41.356895  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:41.356914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:41.371698  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:41.371714  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:41.436289  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:41.427612   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.428221   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430007   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.430584   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:41.432351   12049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:41.436299  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:41.436309  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.006670  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:44.021718  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:44.021788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:44.048534  947325 cri.go:89] found id: ""
	I1213 10:37:44.048549  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.048565  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:44.048571  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:44.048674  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:44.079425  947325 cri.go:89] found id: ""
	I1213 10:37:44.079439  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.079446  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:44.079451  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:44.079523  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:44.106317  947325 cri.go:89] found id: ""
	I1213 10:37:44.106334  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.106342  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:44.106348  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:44.106420  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:44.132520  947325 cri.go:89] found id: ""
	I1213 10:37:44.132534  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.132553  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:44.132558  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:44.132628  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:44.161205  947325 cri.go:89] found id: ""
	I1213 10:37:44.161219  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.161226  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:44.161231  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:44.161291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:44.187876  947325 cri.go:89] found id: ""
	I1213 10:37:44.187890  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.187898  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:44.187903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:44.187961  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:44.215854  947325 cri.go:89] found id: ""
	I1213 10:37:44.215869  947325 logs.go:282] 0 containers: []
	W1213 10:37:44.215876  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:44.215884  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:44.215894  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:44.284854  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:44.276025   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.276798   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278330   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.278909   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:44.280546   12136 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:44.284866  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:44.284876  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:44.355349  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:44.355373  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:44.384733  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:44.384752  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:44.453769  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:44.453788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
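	(Editor's note: the repeated "connection refused" on localhost:8441 means nothing is listening on the apiserver's configured port, consistent with crictl finding no kube-apiserver container in any cycle above. A hedged spot-check, assuming standard Linux tooling on the node — ss, curl, and the /livez endpoint are not part of this log:

	    # Hypothetical check: is anything bound to the apiserver port?
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	    # If something is listening, ask the apiserver directly (self-signed cert, hence -k)
	    curl -sk https://localhost:8441/livez
	)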
	I1213 10:37:46.969736  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:46.979972  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:46.980038  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:47.007061  947325 cri.go:89] found id: ""
	I1213 10:37:47.007075  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.007082  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:47.007087  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:47.007146  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:47.036818  947325 cri.go:89] found id: ""
	I1213 10:37:47.036832  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.036858  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:47.036863  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:47.036921  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:47.061328  947325 cri.go:89] found id: ""
	I1213 10:37:47.061342  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.061349  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:47.061355  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:47.061415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:47.089017  947325 cri.go:89] found id: ""
	I1213 10:37:47.089032  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.089039  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:47.089044  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:47.089103  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:47.114790  947325 cri.go:89] found id: ""
	I1213 10:37:47.114803  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.114810  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:47.114817  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:47.114877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:47.139554  947325 cri.go:89] found id: ""
	I1213 10:37:47.139575  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.139583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:47.139589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:47.139654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:47.165228  947325 cri.go:89] found id: ""
	I1213 10:37:47.165241  947325 logs.go:282] 0 containers: []
	W1213 10:37:47.165248  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:47.165256  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:47.165266  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:47.232293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:47.232313  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:47.261718  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:47.261736  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:47.331592  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:47.331613  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:47.345881  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:47.345897  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:47.412948  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:47.404477   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.405216   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.406839   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.407332   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:47.409008   12259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
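
Between kubectl attempts, the runner probes for an apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*) and then asks the CRI for each control-plane container by name; every query returns an empty ID list, hence the repeated `No container was found matching ...` warnings. A sketch of that per-component check follows, with the crictl invocation copied verbatim from the log; the component list ordering and the output formatting are illustrative assumptions, not minikube's actual cri.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Mirrors `sudo crictl ps -a --quiet --name=<component>` from the log.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the repeated warning in the log above.
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}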
	I1213 10:37:49.913659  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:49.923942  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:49.924005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:49.951850  947325 cri.go:89] found id: ""
	I1213 10:37:49.951863  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.951871  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:49.951876  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:49.951936  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:49.976949  947325 cri.go:89] found id: ""
	I1213 10:37:49.976963  947325 logs.go:282] 0 containers: []
	W1213 10:37:49.976971  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:49.976976  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:49.977034  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:50.020670  947325 cri.go:89] found id: ""
	I1213 10:37:50.020686  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.020693  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:50.020698  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:50.020779  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:50.048299  947325 cri.go:89] found id: ""
	I1213 10:37:50.048316  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.048323  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:50.048328  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:50.048397  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:50.075060  947325 cri.go:89] found id: ""
	I1213 10:37:50.075074  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.075081  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:50.075087  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:50.075148  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:50.104579  947325 cri.go:89] found id: ""
	I1213 10:37:50.104593  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.104601  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:50.104607  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:50.104666  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:50.132679  947325 cri.go:89] found id: ""
	I1213 10:37:50.132693  947325 logs.go:282] 0 containers: []
	W1213 10:37:50.132701  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:50.132714  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:50.132725  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:50.197209  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:50.187857   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.188686   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.190498   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.191212   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:50.192792   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:50.197219  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:50.197230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:50.267157  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:50.267176  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:50.297061  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:50.297077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:50.363929  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:50.363950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:52.879245  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:52.889673  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:52.889741  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:52.914746  947325 cri.go:89] found id: ""
	I1213 10:37:52.914768  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.914776  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:52.914781  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:52.914845  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:52.941523  947325 cri.go:89] found id: ""
	I1213 10:37:52.941554  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.941562  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:52.941567  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:52.941623  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:52.967010  947325 cri.go:89] found id: ""
	I1213 10:37:52.967027  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.967035  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:52.967040  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:52.967141  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:52.992300  947325 cri.go:89] found id: ""
	I1213 10:37:52.992313  947325 logs.go:282] 0 containers: []
	W1213 10:37:52.992321  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:52.992326  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:52.992386  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:53.020045  947325 cri.go:89] found id: ""
	I1213 10:37:53.020058  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.020074  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:53.020081  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:53.020140  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:53.053897  947325 cri.go:89] found id: ""
	I1213 10:37:53.053911  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.053918  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:53.053923  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:53.053982  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:53.079867  947325 cri.go:89] found id: ""
	I1213 10:37:53.079882  947325 logs.go:282] 0 containers: []
	W1213 10:37:53.079890  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:53.079897  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:53.079908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:53.144913  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:53.144932  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:53.159844  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:53.159861  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:53.226427  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:53.218433   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.219033   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.220531   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.221108   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:53.222548   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:53.226436  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:53.226447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:53.294490  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:53.294510  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:55.827710  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:55.837950  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:55.838028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:55.863239  947325 cri.go:89] found id: ""
	I1213 10:37:55.863253  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.863260  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:55.863265  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:55.863331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:55.892876  947325 cri.go:89] found id: ""
	I1213 10:37:55.892890  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.892897  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:55.892902  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:55.892962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:55.919038  947325 cri.go:89] found id: ""
	I1213 10:37:55.919051  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.919059  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:55.919064  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:55.919123  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:55.944982  947325 cri.go:89] found id: ""
	I1213 10:37:55.944997  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.945004  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:55.945009  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:55.945066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:55.974750  947325 cri.go:89] found id: ""
	I1213 10:37:55.974764  947325 logs.go:282] 0 containers: []
	W1213 10:37:55.974771  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:55.974776  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:55.974836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:56.006337  947325 cri.go:89] found id: ""
	I1213 10:37:56.006352  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.006360  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:56.006365  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:56.006429  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:56.033183  947325 cri.go:89] found id: ""
	I1213 10:37:56.033199  947325 logs.go:282] 0 containers: []
	W1213 10:37:56.033206  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:56.033214  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:56.033225  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:56.098781  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:56.098801  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:56.113910  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:56.113933  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:56.179999  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:56.172125   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.172668   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174227   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.174819   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:56.176271   12565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:56.180009  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:56.180020  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:56.248249  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:56.248271  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:37:58.777669  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:37:58.788383  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:37:58.788443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:37:58.815846  947325 cri.go:89] found id: ""
	I1213 10:37:58.815861  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.815868  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:37:58.815873  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:37:58.815933  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:37:58.845912  947325 cri.go:89] found id: ""
	I1213 10:37:58.845926  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.845933  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:37:58.845938  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:37:58.846003  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:37:58.870933  947325 cri.go:89] found id: ""
	I1213 10:37:58.870947  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.870954  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:37:58.870959  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:37:58.871017  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:37:58.900972  947325 cri.go:89] found id: ""
	I1213 10:37:58.900986  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.900993  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:37:58.900998  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:37:58.901054  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:37:58.926234  947325 cri.go:89] found id: ""
	I1213 10:37:58.926257  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.926266  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:37:58.926271  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:37:58.926338  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:37:58.951314  947325 cri.go:89] found id: ""
	I1213 10:37:58.951328  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.951335  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:37:58.951340  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:37:58.951398  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:37:58.981974  947325 cri.go:89] found id: ""
	I1213 10:37:58.981989  947325 logs.go:282] 0 containers: []
	W1213 10:37:58.981996  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:37:58.982003  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:37:58.982014  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:37:59.047152  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:37:59.047172  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:37:59.062001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:37:59.062019  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:37:59.127736  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:37:59.119615   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.120166   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.121736   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.122383   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:37:59.123935   12667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:37:59.127748  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:37:59.127759  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:37:59.196288  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:37:59.196308  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:01.726269  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:01.738227  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:01.738290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:01.765402  947325 cri.go:89] found id: ""
	I1213 10:38:01.765416  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.765423  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:01.765428  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:01.765487  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:01.797073  947325 cri.go:89] found id: ""
	I1213 10:38:01.797087  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.797094  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:01.797105  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:01.797165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:01.822923  947325 cri.go:89] found id: ""
	I1213 10:38:01.822936  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.822943  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:01.822948  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:01.823004  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:01.847458  947325 cri.go:89] found id: ""
	I1213 10:38:01.847472  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.847479  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:01.847484  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:01.847542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:01.876363  947325 cri.go:89] found id: ""
	I1213 10:38:01.876376  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.876383  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:01.876388  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:01.876445  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:01.901894  947325 cri.go:89] found id: ""
	I1213 10:38:01.901908  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.901915  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:01.901920  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:01.901977  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:01.927538  947325 cri.go:89] found id: ""
	I1213 10:38:01.927556  947325 logs.go:282] 0 containers: []
	W1213 10:38:01.927563  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:01.927571  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:01.927585  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:01.993043  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:01.993063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:02.009861  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:02.009878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:02.079070  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:02.070348   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.071182   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.072918   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.073701   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:02.074834   12772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:02.079087  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:02.079097  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:02.150335  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:02.150355  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.680156  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:04.690471  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:04.690534  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:04.717027  947325 cri.go:89] found id: ""
	I1213 10:38:04.717042  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.717049  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:04.717055  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:04.717116  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:04.751100  947325 cri.go:89] found id: ""
	I1213 10:38:04.751114  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.751121  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:04.751126  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:04.751185  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:04.785118  947325 cri.go:89] found id: ""
	I1213 10:38:04.785133  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.785140  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:04.785145  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:04.785206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:04.811838  947325 cri.go:89] found id: ""
	I1213 10:38:04.811852  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.811859  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:04.811864  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:04.811924  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:04.837476  947325 cri.go:89] found id: ""
	I1213 10:38:04.837489  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.837497  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:04.837502  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:04.837589  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:04.863616  947325 cri.go:89] found id: ""
	I1213 10:38:04.863630  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.863637  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:04.863642  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:04.864028  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:04.897282  947325 cri.go:89] found id: ""
	I1213 10:38:04.897297  947325 logs.go:282] 0 containers: []
	W1213 10:38:04.897304  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:04.897311  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:04.897322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:04.970089  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:04.970112  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:04.998787  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:04.998808  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:05.071114  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:05.071136  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:05.086764  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:05.086780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:05.152705  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:05.144665   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.145255   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.146845   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.147330   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:05.148849   12890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
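
The iteration timestamps (10:37:44, :46.9, :49.9, :52.8, :55.8, :58.7, 10:38:01.7, :04.6, :07.6, :10.6) show the loop retrying at a steady interval of roughly three seconds. A hedged sketch of such a fixed-interval wait, assuming a 3s sleep read off those timestamps and an arbitrary 2-minute deadline chosen for illustration (minikube's real timeout differs):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// probe that opens each iteration in the log; pgrep exits 0 on a match.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver is running")
			return
		}
		time.Sleep(3 * time.Second) // matches the spacing of log iterations
	}
	fmt.Println("timed out waiting for kube-apiserver")
}

In the run above the probe never succeeds, so the loop keeps cycling through the same log-gathering pass until the test's own deadline expires.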
	I1213 10:38:07.652961  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:07.663190  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:07.663256  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:07.687596  947325 cri.go:89] found id: ""
	I1213 10:38:07.687611  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.687619  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:07.687624  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:07.687682  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:07.712358  947325 cri.go:89] found id: ""
	I1213 10:38:07.712372  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.712379  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:07.712384  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:07.712443  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:07.747606  947325 cri.go:89] found id: ""
	I1213 10:38:07.747620  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.747627  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:07.747632  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:07.747686  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:07.779928  947325 cri.go:89] found id: ""
	I1213 10:38:07.779942  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.779949  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:07.779954  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:07.780010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:07.809892  947325 cri.go:89] found id: ""
	I1213 10:38:07.809905  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.809912  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:07.809917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:07.809976  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:07.835954  947325 cri.go:89] found id: ""
	I1213 10:38:07.835969  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.835977  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:07.835983  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:07.836045  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:07.863613  947325 cri.go:89] found id: ""
	I1213 10:38:07.863628  947325 logs.go:282] 0 containers: []
	W1213 10:38:07.863635  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:07.863643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:07.863653  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:07.934015  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:07.934035  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:07.949065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:07.949082  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:08.016099  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:08.006616   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.007565   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009216   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.009606   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:08.011135   12984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:08.016110  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:08.016120  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:08.086624  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:08.086643  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:10.620779  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:10.631455  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:10.631519  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:10.657004  947325 cri.go:89] found id: ""
	I1213 10:38:10.657018  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.657025  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:10.657031  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:10.657091  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:10.682863  947325 cri.go:89] found id: ""
	I1213 10:38:10.682879  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.682887  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:10.682892  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:10.682952  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:10.710656  947325 cri.go:89] found id: ""
	I1213 10:38:10.710671  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.710678  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:10.710684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:10.710744  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:10.751941  947325 cri.go:89] found id: ""
	I1213 10:38:10.751955  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.751962  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:10.751967  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:10.752027  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:10.784379  947325 cri.go:89] found id: ""
	I1213 10:38:10.784393  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.784400  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:10.784405  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:10.784462  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:10.812194  947325 cri.go:89] found id: ""
	I1213 10:38:10.812208  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.812215  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:10.812220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:10.812279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:10.837693  947325 cri.go:89] found id: ""
	I1213 10:38:10.837706  947325 logs.go:282] 0 containers: []
	W1213 10:38:10.837714  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:10.837721  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:10.837732  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:10.903946  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:10.903965  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:10.918956  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:10.918972  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:10.991627  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:10.983406   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.984077   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985359   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.985915   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:10.987398   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:10.991638  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:10.991648  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:11.064139  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:11.064160  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
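The repeated "dial tcp [::1]:8441: connect: connection refused" lines are kubectl failing at the TCP layer: nothing is listening on port 8441, the apiserver endpoint recorded in this profile's /var/lib/minikube/kubeconfig, which is consistent with crictl finding no kube-apiserver container at all. The same check can be made without kubectl; a minimal sketch, with the port taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The log shows the dial landing on the IPv6 loopback [::1]:8441.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}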
	I1213 10:38:13.600555  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:13.610666  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:13.610728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:13.635608  947325 cri.go:89] found id: ""
	I1213 10:38:13.635622  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.635629  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:13.635635  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:13.635694  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:13.660494  947325 cri.go:89] found id: ""
	I1213 10:38:13.660509  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.660516  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:13.660521  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:13.660580  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:13.686792  947325 cri.go:89] found id: ""
	I1213 10:38:13.686807  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.686814  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:13.686820  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:13.686877  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:13.712337  947325 cri.go:89] found id: ""
	I1213 10:38:13.712351  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.712358  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:13.712364  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:13.712421  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:13.751688  947325 cri.go:89] found id: ""
	I1213 10:38:13.751703  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.751710  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:13.751716  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:13.751771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:13.778873  947325 cri.go:89] found id: ""
	I1213 10:38:13.778886  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.778893  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:13.778898  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:13.778955  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:13.808036  947325 cri.go:89] found id: ""
	I1213 10:38:13.808050  947325 logs.go:282] 0 containers: []
	W1213 10:38:13.808057  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:13.808065  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:13.808081  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:13.874152  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:13.864618   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.865871   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.866606   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868278   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:13.868976   13192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:13.874162  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:13.874173  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:13.943404  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:13.943424  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:13.971540  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:13.971557  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:14.040558  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:14.040581  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:16.556175  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:16.566366  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:16.566428  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:16.591757  947325 cri.go:89] found id: ""
	I1213 10:38:16.591772  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.591779  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:16.591785  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:16.591842  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:16.617244  947325 cri.go:89] found id: ""
	I1213 10:38:16.617259  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.617266  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:16.617271  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:16.617329  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:16.644168  947325 cri.go:89] found id: ""
	I1213 10:38:16.644182  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.644189  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:16.644194  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:16.644253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:16.673646  947325 cri.go:89] found id: ""
	I1213 10:38:16.673659  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.673666  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:16.673671  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:16.673729  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:16.698771  947325 cri.go:89] found id: ""
	I1213 10:38:16.698785  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.698793  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:16.698798  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:16.698857  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:16.726980  947325 cri.go:89] found id: ""
	I1213 10:38:16.726994  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.727001  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:16.727006  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:16.727066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:16.773642  947325 cri.go:89] found id: ""
	I1213 10:38:16.773657  947325 logs.go:282] 0 containers: []
	W1213 10:38:16.773665  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:16.773673  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:16.773685  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:16.807643  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:16.807660  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:16.874674  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:16.874698  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:16.890281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:16.890299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:16.958318  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:16.949056   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.950510   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.951914   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.952759   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:16.954416   13311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:16.958330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:16.958343  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
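Notice that the "Gathering logs for ..." steps come back in a different order on each cycle: kubelet first at 10:38:10, describe nodes first at 10:38:13, container status first at 10:38:16. That shifting order is characteristic of ranging over a Go map, whose iteration order is randomized on every run; a tiny demonstration, with source names chosen to mirror the ones above:

package main

import "fmt"

func main() {
	// Go randomizes map iteration order per run, so these keys can print
	// in a different sequence each time, just like the shifting
	// "Gathering logs for ..." order in the log.
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg ... | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"CRI-O":            "journalctl -u crio -n 400",
		"container status": "crictl ps -a || docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ... (%s)\n", name, cmd)
	}
}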
	I1213 10:38:19.528319  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:19.539728  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:19.539789  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:19.571108  947325 cri.go:89] found id: ""
	I1213 10:38:19.571121  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.571129  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:19.571134  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:19.571194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:19.597765  947325 cri.go:89] found id: ""
	I1213 10:38:19.597779  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.597787  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:19.597792  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:19.597853  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:19.623110  947325 cri.go:89] found id: ""
	I1213 10:38:19.623124  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.623137  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:19.623142  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:19.623204  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:19.648553  947325 cri.go:89] found id: ""
	I1213 10:38:19.648568  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.648575  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:19.648580  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:19.648652  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:19.674550  947325 cri.go:89] found id: ""
	I1213 10:38:19.674565  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.674572  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:19.674577  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:19.674635  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:19.704458  947325 cri.go:89] found id: ""
	I1213 10:38:19.704473  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.704480  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:19.704486  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:19.704560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:19.742545  947325 cri.go:89] found id: ""
	I1213 10:38:19.742559  947325 logs.go:282] 0 containers: []
	W1213 10:38:19.742566  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:19.742573  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:19.742584  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:19.818214  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:19.818236  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:19.833741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:19.833757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:19.899700  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:19.891381   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.892146   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893293   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.893921   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:19.895742   13404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:19.899710  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:19.899731  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:19.969264  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:19.969284  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:22.501918  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:22.513303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:22.513368  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:22.542006  947325 cri.go:89] found id: ""
	I1213 10:38:22.542020  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.542028  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:22.542033  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:22.542109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:22.572046  947325 cri.go:89] found id: ""
	I1213 10:38:22.572061  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.572068  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:22.572073  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:22.572131  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:22.599640  947325 cri.go:89] found id: ""
	I1213 10:38:22.599654  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.599660  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:22.599665  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:22.599728  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:22.628632  947325 cri.go:89] found id: ""
	I1213 10:38:22.628646  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.628653  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:22.628658  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:22.628717  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:22.655032  947325 cri.go:89] found id: ""
	I1213 10:38:22.655046  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.655053  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:22.655058  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:22.655119  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:22.682403  947325 cri.go:89] found id: ""
	I1213 10:38:22.682422  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.682431  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:22.682436  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:22.682511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:22.709263  947325 cri.go:89] found id: ""
	I1213 10:38:22.709277  947325 logs.go:282] 0 containers: []
	W1213 10:38:22.709286  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:22.709293  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:22.709307  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:22.748554  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:22.748573  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:22.820355  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:22.820376  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:22.836069  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:22.836100  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:22.902594  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:22.894546   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.895165   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.896717   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.897250   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:22.898679   13522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:22.902605  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:22.902616  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.474313  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:25.484536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:25.484600  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:25.512648  947325 cri.go:89] found id: ""
	I1213 10:38:25.512662  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.512670  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:25.512675  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:25.512736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:25.545720  947325 cri.go:89] found id: ""
	I1213 10:38:25.545739  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.545746  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:25.545752  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:25.545821  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:25.572807  947325 cri.go:89] found id: ""
	I1213 10:38:25.572820  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.572827  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:25.572832  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:25.572890  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:25.597850  947325 cri.go:89] found id: ""
	I1213 10:38:25.597864  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.597871  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:25.597876  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:25.597939  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:25.622944  947325 cri.go:89] found id: ""
	I1213 10:38:25.622958  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.622965  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:25.622971  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:25.623030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:25.647255  947325 cri.go:89] found id: ""
	I1213 10:38:25.647268  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.647276  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:25.647281  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:25.647339  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:25.672821  947325 cri.go:89] found id: ""
	I1213 10:38:25.672837  947325 logs.go:282] 0 containers: []
	W1213 10:38:25.672844  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:25.672864  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:25.672875  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:25.744377  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:25.744397  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:25.773682  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:25.773699  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:25.843372  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:25.843396  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:25.858420  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:25.858437  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:25.923733  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:25.915727   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.916379   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.917934   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.918499   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:25.919915   13633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
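The container-status command is worth unpacking: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl from PATH (keeping the bare name if which fails) and, only if that crictl invocation itself errors out, retries with docker. A rough Go equivalent of the same fallback, using standard-library lookups only:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mimics the shell fallback from the log:
// try crictl (resolved from PATH when possible), then docker.
func containerStatus() ([]byte, error) {
	bin := "crictl" // `which crictl || echo crictl` keeps the bare name on a miss
	if path, err := exec.LookPath("crictl"); err == nil {
		bin = path
	}
	if out, err := exec.Command("sudo", bin, "ps", "-a").Output(); err == nil {
		return out, nil
	}
	// `|| sudo docker ps -a`: only reached when the crictl run fails
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both runtimes unavailable:", err)
		return
	}
	fmt.Print(string(out))
}

The double safety net lets the same gather step work on both CRI-O/containerd nodes and docker-runtime nodes.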
	I1213 10:38:28.424008  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:28.434425  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:28.434490  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:28.459479  947325 cri.go:89] found id: ""
	I1213 10:38:28.459493  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.459501  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:28.459506  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:28.459569  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:28.488343  947325 cri.go:89] found id: ""
	I1213 10:38:28.488357  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.488365  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:28.488370  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:28.488431  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:28.513634  947325 cri.go:89] found id: ""
	I1213 10:38:28.513649  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.513656  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:28.513661  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:28.513719  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:28.540169  947325 cri.go:89] found id: ""
	I1213 10:38:28.540182  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.540190  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:28.540195  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:28.540253  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:28.564331  947325 cri.go:89] found id: ""
	I1213 10:38:28.564344  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.564351  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:28.564356  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:28.564415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:28.592829  947325 cri.go:89] found id: ""
	I1213 10:38:28.592844  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.592851  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:28.592856  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:28.592913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:28.618020  947325 cri.go:89] found id: ""
	I1213 10:38:28.618035  947325 logs.go:282] 0 containers: []
	W1213 10:38:28.618044  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:28.618052  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:28.618063  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:28.685306  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:28.685326  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:28.713761  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:28.713779  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:28.794463  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:28.794484  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:28.809677  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:28.809696  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:28.870924  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:28.863257   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.863803   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.864955   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.865616   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:28.867097   13740 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.371199  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:31.381501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:31.381583  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:31.408362  947325 cri.go:89] found id: ""
	I1213 10:38:31.408376  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.408383  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:31.408388  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:31.408454  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:31.434743  947325 cri.go:89] found id: ""
	I1213 10:38:31.434758  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.434766  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:31.434772  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:31.434831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:31.467710  947325 cri.go:89] found id: ""
	I1213 10:38:31.467724  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.467731  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:31.467736  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:31.467795  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:31.493177  947325 cri.go:89] found id: ""
	I1213 10:38:31.493191  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.493198  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:31.493203  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:31.493263  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:31.517966  947325 cri.go:89] found id: ""
	I1213 10:38:31.517980  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.517987  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:31.517992  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:31.518057  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:31.542186  947325 cri.go:89] found id: ""
	I1213 10:38:31.542201  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.542208  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:31.542213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:31.542270  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:31.567569  947325 cri.go:89] found id: ""
	I1213 10:38:31.567583  947325 logs.go:282] 0 containers: []
	W1213 10:38:31.567590  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:31.567598  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:31.567609  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:31.633128  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:31.633147  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:31.647898  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:31.647916  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:31.713585  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:31.704990   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.706015   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.707614   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.708200   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:31.709708   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:31.713595  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:31.713606  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:31.784338  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:31.784357  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.315454  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:34.327061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:34.327130  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:34.356795  947325 cri.go:89] found id: ""
	I1213 10:38:34.356809  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.356817  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:34.356822  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:34.356892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:34.384789  947325 cri.go:89] found id: ""
	I1213 10:38:34.384804  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.384812  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:34.384817  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:34.384907  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:34.410778  947325 cri.go:89] found id: ""
	I1213 10:38:34.410791  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.410799  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:34.410804  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:34.410861  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:34.440426  947325 cri.go:89] found id: ""
	I1213 10:38:34.440440  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.440454  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:34.440459  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:34.440514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:34.465148  947325 cri.go:89] found id: ""
	I1213 10:38:34.465162  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.465170  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:34.465175  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:34.465236  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:34.491230  947325 cri.go:89] found id: ""
	I1213 10:38:34.491245  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.491253  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:34.491259  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:34.491364  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:34.520190  947325 cri.go:89] found id: ""
	I1213 10:38:34.520205  947325 logs.go:282] 0 containers: []
	W1213 10:38:34.520213  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:34.520220  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:34.520235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:34.552635  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:34.552652  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:34.617894  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:34.617914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:34.632507  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:34.632528  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:34.697693  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:34.688967   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.689672   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691242   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.691552   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:34.693083   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:34.697704  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:34.697715  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.276776  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:37.287236  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:37.287306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:37.314091  947325 cri.go:89] found id: ""
	I1213 10:38:37.314105  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.314112  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:37.314118  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:37.314180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:37.343079  947325 cri.go:89] found id: ""
	I1213 10:38:37.343092  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.343099  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:37.343104  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:37.343162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:37.371406  947325 cri.go:89] found id: ""
	I1213 10:38:37.371420  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.371428  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:37.371432  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:37.371489  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:37.400383  947325 cri.go:89] found id: ""
	I1213 10:38:37.400398  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.400405  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:37.400415  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:37.400473  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:37.432217  947325 cri.go:89] found id: ""
	I1213 10:38:37.432232  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.432240  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:37.432245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:37.432306  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:37.459687  947325 cri.go:89] found id: ""
	I1213 10:38:37.459701  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.459708  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:37.459713  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:37.459771  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:37.491295  947325 cri.go:89] found id: ""
	I1213 10:38:37.491309  947325 logs.go:282] 0 containers: []
	W1213 10:38:37.491316  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:37.491324  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:37.491335  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:37.569044  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:37.569068  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:37.598399  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:37.598416  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:37.669854  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:37.669873  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:37.685001  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:37.685024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:37.764039  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:37.754418   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.755520   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.757588   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.758501   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.759525   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:37.754418   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.755520   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.757588   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.758501   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:37.759525   14047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:40.265130  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:40.276597  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:40.276660  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:40.301799  947325 cri.go:89] found id: ""
	I1213 10:38:40.301815  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.301822  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:40.301828  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:40.301884  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:40.328096  947325 cri.go:89] found id: ""
	I1213 10:38:40.328110  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.328117  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:40.328122  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:40.328180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:40.352505  947325 cri.go:89] found id: ""
	I1213 10:38:40.352520  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.352527  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:40.352532  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:40.352592  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:40.381218  947325 cri.go:89] found id: ""
	I1213 10:38:40.381233  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.381240  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:40.381245  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:40.381303  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:40.406747  947325 cri.go:89] found id: ""
	I1213 10:38:40.406761  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.406769  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:40.406774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:40.406836  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:40.432179  947325 cri.go:89] found id: ""
	I1213 10:38:40.432193  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.432200  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:40.432230  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:40.432294  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:40.457241  947325 cri.go:89] found id: ""
	I1213 10:38:40.457256  947325 logs.go:282] 0 containers: []
	W1213 10:38:40.457263  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:40.457270  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:40.457281  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:40.485384  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:40.485400  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:40.553931  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:40.553950  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:40.568552  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:40.568568  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:40.631691  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:40.623997   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.624643   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626097   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626582   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.628021   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:40.623997   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.624643   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626097   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.626582   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:40.628021   14152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:40.631701  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:40.631711  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.202405  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:43.212618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:43.212681  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:43.237960  947325 cri.go:89] found id: ""
	I1213 10:38:43.237975  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.237981  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:43.237986  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:43.238046  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:43.262400  947325 cri.go:89] found id: ""
	I1213 10:38:43.262415  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.262422  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:43.262427  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:43.262485  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:43.287113  947325 cri.go:89] found id: ""
	I1213 10:38:43.287126  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.287133  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:43.287138  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:43.287194  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:43.311437  947325 cri.go:89] found id: ""
	I1213 10:38:43.311451  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.311459  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:43.311464  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:43.311520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:43.338038  947325 cri.go:89] found id: ""
	I1213 10:38:43.338052  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.338059  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:43.338066  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:43.338125  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:43.363248  947325 cri.go:89] found id: ""
	I1213 10:38:43.363262  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.363269  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:43.363274  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:43.363331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:43.388331  947325 cri.go:89] found id: ""
	I1213 10:38:43.388346  947325 logs.go:282] 0 containers: []
	W1213 10:38:43.388353  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:43.388361  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:43.388371  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:43.456040  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:43.448208   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.448885   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.450561   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.451211   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.452293   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:43.448208   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.448885   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.450561   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.451211   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:43.452293   14240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:43.456051  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:43.456062  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:43.529676  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:43.529697  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:43.557667  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:43.557683  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:43.626256  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:43.626276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.141151  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:46.151629  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:46.151691  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:46.177078  947325 cri.go:89] found id: ""
	I1213 10:38:46.177092  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.177099  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:46.177104  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:46.177163  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:46.203681  947325 cri.go:89] found id: ""
	I1213 10:38:46.203695  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.203702  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:46.203707  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:46.203765  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:46.228801  947325 cri.go:89] found id: ""
	I1213 10:38:46.228815  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.228823  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:46.228828  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:46.228892  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:46.254742  947325 cri.go:89] found id: ""
	I1213 10:38:46.254756  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.254763  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:46.254768  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:46.254825  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:46.286504  947325 cri.go:89] found id: ""
	I1213 10:38:46.286522  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.286529  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:46.286534  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:46.286596  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:46.311507  947325 cri.go:89] found id: ""
	I1213 10:38:46.311523  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.311531  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:46.311536  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:46.311599  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:46.340455  947325 cri.go:89] found id: ""
	I1213 10:38:46.340469  947325 logs.go:282] 0 containers: []
	W1213 10:38:46.340477  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:46.340496  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:46.340508  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:46.410798  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:46.410817  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:46.425740  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:46.425758  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:46.488528  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:46.479589   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.480382   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482285   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482891   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.484595   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:46.479589   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.480382   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482285   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.482891   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:46.484595   14352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:46.488537  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:46.488549  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:46.558649  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:46.558668  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:49.089125  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:49.099199  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:49.099261  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:49.128242  947325 cri.go:89] found id: ""
	I1213 10:38:49.128256  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.128263  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:49.128268  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:49.128328  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:49.154103  947325 cri.go:89] found id: ""
	I1213 10:38:49.154117  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.154124  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:49.154129  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:49.154189  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:49.178738  947325 cri.go:89] found id: ""
	I1213 10:38:49.178754  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.178762  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:49.178767  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:49.178824  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:49.203209  947325 cri.go:89] found id: ""
	I1213 10:38:49.203223  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.203230  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:49.203235  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:49.203290  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:49.228158  947325 cri.go:89] found id: ""
	I1213 10:38:49.228174  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.228181  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:49.228186  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:49.228245  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:49.257410  947325 cri.go:89] found id: ""
	I1213 10:38:49.257425  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.257432  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:49.257437  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:49.257503  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:49.284405  947325 cri.go:89] found id: ""
	I1213 10:38:49.284419  947325 logs.go:282] 0 containers: []
	W1213 10:38:49.284428  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:49.284436  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:49.284447  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:49.350814  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:49.350834  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:49.365897  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:49.365914  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:49.428434  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:49.419689   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.420411   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422194   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422785   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.424440   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:49.419689   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.420411   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422194   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.422785   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:49.424440   14458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:49.428445  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:49.428455  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:49.497319  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:49.497338  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:52.026790  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:52.037493  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:52.037629  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:52.065928  947325 cri.go:89] found id: ""
	I1213 10:38:52.065942  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.065959  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:52.065966  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:52.066030  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:52.093348  947325 cri.go:89] found id: ""
	I1213 10:38:52.093377  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.093385  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:52.093391  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:52.093461  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:52.120408  947325 cri.go:89] found id: ""
	I1213 10:38:52.120438  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.120446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:52.120451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:52.120520  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:52.151619  947325 cri.go:89] found id: ""
	I1213 10:38:52.151633  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.151640  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:52.151645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:52.151709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:52.181293  947325 cri.go:89] found id: ""
	I1213 10:38:52.181307  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.181314  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:52.181319  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:52.181381  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:52.207056  947325 cri.go:89] found id: ""
	I1213 10:38:52.207073  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.207080  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:52.207085  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:52.207144  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:52.232482  947325 cri.go:89] found id: ""
	I1213 10:38:52.232495  947325 logs.go:282] 0 containers: []
	W1213 10:38:52.232503  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:52.232511  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:52.232523  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:52.298884  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:52.298908  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:52.314165  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:52.314184  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:52.379432  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:52.370728   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.371164   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.372944   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.373396   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.375062   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:52.370728   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.371164   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.372944   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.373396   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:52.375062   14563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:52.379442  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:52.379454  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:52.447720  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:52.447739  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:54.981781  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:54.994265  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:54.994331  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:55.034512  947325 cri.go:89] found id: ""
	I1213 10:38:55.034527  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.034535  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:55.034541  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:55.034603  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:55.064371  947325 cri.go:89] found id: ""
	I1213 10:38:55.064385  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.064393  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:55.064399  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:55.064464  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:55.094614  947325 cri.go:89] found id: ""
	I1213 10:38:55.094628  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.094635  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:55.094640  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:55.094703  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:55.122445  947325 cri.go:89] found id: ""
	I1213 10:38:55.122469  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.122476  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:55.122482  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:55.122565  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:55.149483  947325 cri.go:89] found id: ""
	I1213 10:38:55.149497  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.149505  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:55.149510  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:55.149608  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:55.177190  947325 cri.go:89] found id: ""
	I1213 10:38:55.177204  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.177211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:55.177216  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:55.177276  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:55.205792  947325 cri.go:89] found id: ""
	I1213 10:38:55.205805  947325 logs.go:282] 0 containers: []
	W1213 10:38:55.205813  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:55.205820  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:55.205831  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:55.274521  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:55.274543  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:55.303850  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:55.303867  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:55.372053  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:55.372072  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:38:55.386741  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:55.386757  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:55.453760  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:55.443866   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.444485   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.446205   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448348   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448876   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:55.443866   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.444485   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.446205   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448348   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:55.448876   14678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:57.954020  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:38:57.964050  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:38:57.964109  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:38:57.998468  947325 cri.go:89] found id: ""
	I1213 10:38:57.998484  947325 logs.go:282] 0 containers: []
	W1213 10:38:57.998492  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:38:57.998497  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:38:57.998564  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:38:58.035565  947325 cri.go:89] found id: ""
	I1213 10:38:58.035580  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.035587  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:38:58.035592  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:38:58.035654  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:38:58.066882  947325 cri.go:89] found id: ""
	I1213 10:38:58.066903  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.066912  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:38:58.066917  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:38:58.066978  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:38:58.092977  947325 cri.go:89] found id: ""
	I1213 10:38:58.093007  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.093014  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:38:58.093019  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:38:58.093088  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:38:58.123222  947325 cri.go:89] found id: ""
	I1213 10:38:58.123235  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.123243  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:38:58.123248  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:38:58.123311  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:38:58.148191  947325 cri.go:89] found id: ""
	I1213 10:38:58.148204  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.148211  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:38:58.148226  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:38:58.148283  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:38:58.174245  947325 cri.go:89] found id: ""
	I1213 10:38:58.174259  947325 logs.go:282] 0 containers: []
	W1213 10:38:58.174266  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:38:58.174274  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:38:58.174286  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:38:58.238353  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:38:58.230226   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.230884   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232487   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232939   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.234404   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:38:58.230226   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.230884   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232487   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.232939   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:38:58.234404   14764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:38:58.238363  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:38:58.238374  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:38:58.310390  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:38:58.310414  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:38:58.339218  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:38:58.339235  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:38:58.411033  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:38:58.411053  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
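Each retry in this loop follows the same pattern: minikube first checks for a live kube-apiserver process, then asks the CRI runtime for containers matching each control-plane component, and only then falls back to gathering kubelet, dmesg, kubectl, CRI-O, and container-status logs. A minimal sketch of the same per-component probe, built only from the crictl invocations visible in the log (assuming crictl is on the node's PATH):

    # Probe each control-plane component the way the harness does above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # An empty result corresponds to the W-level "No container was found" lines.
      [ -z "$ids" ] && echo "no container matching $name"
    done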
	I1213 10:39:00.926322  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:00.937217  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:00.937279  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:00.963631  947325 cri.go:89] found id: ""
	I1213 10:39:00.963645  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.963653  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:00.963658  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:00.963720  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:00.992312  947325 cri.go:89] found id: ""
	I1213 10:39:00.992327  947325 logs.go:282] 0 containers: []
	W1213 10:39:00.992334  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:00.992340  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:00.992402  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:01.019653  947325 cri.go:89] found id: ""
	I1213 10:39:01.019667  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.019674  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:01.019679  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:01.019737  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:01.048197  947325 cri.go:89] found id: ""
	I1213 10:39:01.048211  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.048218  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:01.048224  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:01.048278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:01.077274  947325 cri.go:89] found id: ""
	I1213 10:39:01.077288  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.077296  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:01.077301  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:01.077359  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:01.102210  947325 cri.go:89] found id: ""
	I1213 10:39:01.102225  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.102232  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:01.102237  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:01.102296  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:01.127343  947325 cri.go:89] found id: ""
	I1213 10:39:01.127357  947325 logs.go:282] 0 containers: []
	W1213 10:39:01.127364  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:01.127372  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:01.127384  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:01.193045  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:01.184559   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.185631   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.186444   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187426   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187971   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:01.184559   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.185631   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.186444   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187426   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:01.187971   14862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:01.193056  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:01.193066  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:01.263652  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:01.263672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:01.300661  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:01.300679  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:01.369051  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:01.369070  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:03.885575  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:03.895834  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:03.895898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:03.926313  947325 cri.go:89] found id: ""
	I1213 10:39:03.926327  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.926335  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:03.926339  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:03.926396  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:03.954240  947325 cri.go:89] found id: ""
	I1213 10:39:03.954254  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.954261  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:03.954266  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:03.954324  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:03.984134  947325 cri.go:89] found id: ""
	I1213 10:39:03.984148  947325 logs.go:282] 0 containers: []
	W1213 10:39:03.984154  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:03.984159  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:03.984224  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:04.016879  947325 cri.go:89] found id: ""
	I1213 10:39:04.016894  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.016901  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:04.016906  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:04.016965  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:04.041176  947325 cri.go:89] found id: ""
	I1213 10:39:04.041190  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.041203  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:04.041208  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:04.041267  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:04.066331  947325 cri.go:89] found id: ""
	I1213 10:39:04.066345  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.066351  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:04.066357  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:04.066415  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:04.090857  947325 cri.go:89] found id: ""
	I1213 10:39:04.090886  947325 logs.go:282] 0 containers: []
	W1213 10:39:04.090895  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:04.090903  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:04.090917  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:04.156570  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:04.156590  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:04.171387  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:04.171404  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:04.240263  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:04.226379   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.227108   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.228895   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.229425   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:04.230990   14974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:04.240273  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:04.240285  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:04.319651  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:04.319672  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
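The repeated "connection refused" stderr is consistent with the empty crictl results: the kubeconfig points kubectl at https://localhost:8441, but with no kube-apiserver container running, nothing is listening there. Two hypothetical one-off checks (not part of the harness, just an editorial sketch of how to confirm the same state from the node):

    # Nothing should be listening on the apiserver port while the failure persists.
    sudo ss -ltn 'sport = :8441'
    # The request kubectl keeps making; expect "connection refused" until the apiserver is back.
    curl -sk https://localhost:8441/healthz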
	I1213 10:39:06.852882  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:06.864121  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:06.864186  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:06.890733  947325 cri.go:89] found id: ""
	I1213 10:39:06.890748  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.890756  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:06.890761  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:06.890819  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:06.917207  947325 cri.go:89] found id: ""
	I1213 10:39:06.917222  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.917228  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:06.917234  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:06.917291  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:06.943186  947325 cri.go:89] found id: ""
	I1213 10:39:06.943201  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.943208  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:06.943213  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:06.943278  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:06.973557  947325 cri.go:89] found id: ""
	I1213 10:39:06.973571  947325 logs.go:282] 0 containers: []
	W1213 10:39:06.973579  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:06.973584  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:06.973641  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:07.004748  947325 cri.go:89] found id: ""
	I1213 10:39:07.004770  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.004778  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:07.004783  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:07.004851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:07.030997  947325 cri.go:89] found id: ""
	I1213 10:39:07.031011  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.031019  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:07.031024  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:07.031080  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:07.055983  947325 cri.go:89] found id: ""
	I1213 10:39:07.055997  947325 logs.go:282] 0 containers: []
	W1213 10:39:07.056004  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:07.056012  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:07.056024  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:07.084902  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:07.084919  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:07.153213  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:07.153232  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:07.168429  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:07.168446  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:07.232563  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:07.223603   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.224430   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226089   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.226414   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:07.227903   15088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:07.232586  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:07.232598  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:09.804561  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:09.814452  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:09.814514  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:09.840080  947325 cri.go:89] found id: ""
	I1213 10:39:09.840093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.840101  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:09.840106  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:09.840170  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:09.864603  947325 cri.go:89] found id: ""
	I1213 10:39:09.864617  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.864625  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:09.864630  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:09.864697  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:09.889079  947325 cri.go:89] found id: ""
	I1213 10:39:09.889093  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.889101  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:09.889106  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:09.889162  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:09.915869  947325 cri.go:89] found id: ""
	I1213 10:39:09.915883  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.915890  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:09.915895  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:09.915954  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:09.945590  947325 cri.go:89] found id: ""
	I1213 10:39:09.945603  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.945610  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:09.945618  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:09.945678  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:09.971712  947325 cri.go:89] found id: ""
	I1213 10:39:09.971725  947325 logs.go:282] 0 containers: []
	W1213 10:39:09.971732  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:09.971737  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:09.971798  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:10.003581  947325 cri.go:89] found id: ""
	I1213 10:39:10.003600  947325 logs.go:282] 0 containers: []
	W1213 10:39:10.003608  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:10.003618  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:10.003633  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:10.077821  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:10.077842  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:10.108375  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:10.108392  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:10.178400  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:10.178420  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:10.193608  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:10.193647  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:10.270772  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:10.262269   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.263276   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265019   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.265328   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:10.266816   15196 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:12.771904  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:12.782049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:12.782110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:12.806673  947325 cri.go:89] found id: ""
	I1213 10:39:12.806687  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.806695  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:12.806700  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:12.806757  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:12.835814  947325 cri.go:89] found id: ""
	I1213 10:39:12.835829  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.835836  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:12.835841  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:12.835898  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:12.861712  947325 cri.go:89] found id: ""
	I1213 10:39:12.861727  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.861734  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:12.861740  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:12.861804  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:12.886652  947325 cri.go:89] found id: ""
	I1213 10:39:12.886666  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.886673  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:12.886678  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:12.886736  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:12.916010  947325 cri.go:89] found id: ""
	I1213 10:39:12.916025  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.916032  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:12.916037  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:12.916100  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:12.946655  947325 cri.go:89] found id: ""
	I1213 10:39:12.946672  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.946679  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:12.946684  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:12.946748  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:12.976684  947325 cri.go:89] found id: ""
	I1213 10:39:12.976698  947325 logs.go:282] 0 containers: []
	W1213 10:39:12.976705  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:12.976713  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:12.976726  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:13.043449  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:13.043472  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:13.059281  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:13.059299  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:13.122969  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:13.114879   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.115451   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117021   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.117507   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:13.119078   15289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:13.122981  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:13.122991  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:13.193301  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:13.193322  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:15.728135  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:15.739049  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:15.739110  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:15.764321  947325 cri.go:89] found id: ""
	I1213 10:39:15.764335  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.764342  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:15.764348  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:15.764410  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:15.794053  947325 cri.go:89] found id: ""
	I1213 10:39:15.794068  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.794077  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:15.794083  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:15.794138  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:15.819708  947325 cri.go:89] found id: ""
	I1213 10:39:15.819721  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.819729  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:15.819734  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:15.819793  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:15.850534  947325 cri.go:89] found id: ""
	I1213 10:39:15.850548  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.850556  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:15.850561  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:15.850618  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:15.879609  947325 cri.go:89] found id: ""
	I1213 10:39:15.879623  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.879631  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:15.879636  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:15.879700  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:15.908873  947325 cri.go:89] found id: ""
	I1213 10:39:15.908887  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.908895  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:15.908901  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:15.908967  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:15.936537  947325 cri.go:89] found id: ""
	I1213 10:39:15.936552  947325 logs.go:282] 0 containers: []
	W1213 10:39:15.936559  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:15.936567  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:15.936580  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:16.005668  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:16.005690  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:16.036804  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:16.036822  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:16.105762  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:16.105780  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:16.121830  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:16.121849  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:16.189324  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:16.180755   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.181397   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183115   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.183776   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:16.185271   15407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
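The order in which the log sources are gathered rotates from one iteration to the next, but the underlying commands stay fixed. Pulled directly from the Run: lines above (only the grouping here is editorial):

    sudo journalctl -u kubelet -n 400                                          # kubelet
    sudo journalctl -u crio -n 400                                             # CRI-O
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # dmesg
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status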
	I1213 10:39:18.689610  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:18.699729  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:18.699788  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:18.725083  947325 cri.go:89] found id: ""
	I1213 10:39:18.725097  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.725105  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:18.725110  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:18.725165  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:18.751300  947325 cri.go:89] found id: ""
	I1213 10:39:18.751315  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.751327  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:18.751333  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:18.751390  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:18.776458  947325 cri.go:89] found id: ""
	I1213 10:39:18.776473  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.776480  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:18.776485  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:18.776543  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:18.801403  947325 cri.go:89] found id: ""
	I1213 10:39:18.801416  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.801423  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:18.801428  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:18.801488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:18.828035  947325 cri.go:89] found id: ""
	I1213 10:39:18.828053  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.828060  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:18.828065  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:18.828122  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:18.852563  947325 cri.go:89] found id: ""
	I1213 10:39:18.852577  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.852583  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:18.852589  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:18.852647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:18.879882  947325 cri.go:89] found id: ""
	I1213 10:39:18.879897  947325 logs.go:282] 0 containers: []
	W1213 10:39:18.879904  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:18.879912  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:18.879922  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:18.913762  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:18.913788  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:18.978817  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:18.978840  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:18.994917  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:18.994936  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:19.062190  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:19.054243   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.054818   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056322   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.056831   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:19.058280   15510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:19.062201  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:19.062213  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.629331  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:21.639522  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:21.639593  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:21.664074  947325 cri.go:89] found id: ""
	I1213 10:39:21.664089  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.664097  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:21.664102  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:21.664164  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:21.689123  947325 cri.go:89] found id: ""
	I1213 10:39:21.689136  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.689144  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:21.689149  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:21.689206  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:21.713736  947325 cri.go:89] found id: ""
	I1213 10:39:21.713750  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.713758  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:21.713762  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:21.713817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:21.741978  947325 cri.go:89] found id: ""
	I1213 10:39:21.741991  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.741999  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:21.742004  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:21.742063  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:21.767443  947325 cri.go:89] found id: ""
	I1213 10:39:21.767458  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.767464  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:21.767469  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:21.767526  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:21.792419  947325 cri.go:89] found id: ""
	I1213 10:39:21.792434  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.792457  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:21.792463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:21.792529  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:21.821837  947325 cri.go:89] found id: ""
	I1213 10:39:21.821851  947325 logs.go:282] 0 containers: []
	W1213 10:39:21.821859  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:21.821867  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:21.821878  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:21.836299  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:21.836315  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:21.902625  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:21.894040   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.894485   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.896277   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.897017   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:21.898534   15602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:21.902635  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:21.902646  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:21.971184  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:21.971204  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:22.003828  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:22.003847  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
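Each cycle is re-triggered by the probe sudo pgrep -xnf kube-apiserver.*minikube.*, which matches the full command line (-f) exactly (-x) and prints only the newest match (-n); an empty result, exit status 1, is what sends the harness back into another round of log gathering. A hypothetical standalone check along the same lines:

    # Prints the apiserver PID on a healthy node; exits non-zero here.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo 'apiserver running' || echo 'apiserver not running'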
	I1213 10:39:24.576083  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:24.587706  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:24.587784  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:24.613620  947325 cri.go:89] found id: ""
	I1213 10:39:24.613635  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.613643  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:24.613648  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:24.613706  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:24.639792  947325 cri.go:89] found id: ""
	I1213 10:39:24.639807  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.639814  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:24.639820  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:24.639897  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:24.664551  947325 cri.go:89] found id: ""
	I1213 10:39:24.664566  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.664573  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:24.664578  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:24.664638  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:24.689748  947325 cri.go:89] found id: ""
	I1213 10:39:24.689762  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.689769  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:24.689774  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:24.689831  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:24.718617  947325 cri.go:89] found id: ""
	I1213 10:39:24.718632  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.718639  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:24.718645  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:24.718702  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:24.748026  947325 cri.go:89] found id: ""
	I1213 10:39:24.748040  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.748047  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:24.748052  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:24.748117  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:24.774049  947325 cri.go:89] found id: ""
	I1213 10:39:24.774063  947325 logs.go:282] 0 containers: []
	W1213 10:39:24.774070  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:24.774084  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:24.774095  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:24.840008  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:24.840029  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:24.855570  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:24.855587  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:24.924254  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:24.915904   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.916383   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918059   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.918622   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:24.920297   15706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:24.924266  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:24.924276  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:24.993620  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:24.993639  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... the log-gathering cycle above repeats every ~3 seconds (at 10:39:27, 10:39:30, 10:39:33, 10:39:36, 10:39:39, 10:39:42, 10:39:45, and 10:39:48) with identical results: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers are found, and every "kubectl describe nodes" attempt exits with status 1 because the connection to the server localhost:8441 is refused ...]
	I1213 10:39:51.224632  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:51.234995  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:51.235060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:51.260920  947325 cri.go:89] found id: ""
	I1213 10:39:51.260934  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.260941  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:51.260946  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:51.261010  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:51.288308  947325 cri.go:89] found id: ""
	I1213 10:39:51.288323  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.288330  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:51.288335  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:51.288395  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:51.313237  947325 cri.go:89] found id: ""
	I1213 10:39:51.313251  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.313258  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:51.313263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:51.313322  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:51.340832  947325 cri.go:89] found id: ""
	I1213 10:39:51.340845  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.340852  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:51.340857  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:51.340913  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:51.367975  947325 cri.go:89] found id: ""
	I1213 10:39:51.367989  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.367996  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:51.368000  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:51.368059  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:51.393715  947325 cri.go:89] found id: ""
	I1213 10:39:51.393728  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.393736  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:51.393741  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:51.393803  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:51.422317  947325 cri.go:89] found id: ""
	I1213 10:39:51.422331  947325 logs.go:282] 0 containers: []
	W1213 10:39:51.422338  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:51.422345  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:51.422356  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:51.492559  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:51.492577  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:51.531769  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:51.531786  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:51.599294  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:51.599316  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:51.615318  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:51.615334  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:51.678990  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:51.669927   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.670629   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672315   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672978   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.674480   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:51.669927   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.670629   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672315   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.672978   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:51.674480   16661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:54.180647  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:54.190751  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:54.190817  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:54.216105  947325 cri.go:89] found id: ""
	I1213 10:39:54.216119  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.216126  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:54.216131  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:54.216188  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:54.245934  947325 cri.go:89] found id: ""
	I1213 10:39:54.245948  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.245955  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:54.245960  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:54.246019  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:54.272786  947325 cri.go:89] found id: ""
	I1213 10:39:54.272800  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.272807  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:54.272812  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:54.272871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:54.298724  947325 cri.go:89] found id: ""
	I1213 10:39:54.298738  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.298745  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:54.298750  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:54.298814  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:54.324500  947325 cri.go:89] found id: ""
	I1213 10:39:54.324514  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.324522  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:54.324533  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:54.324647  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:54.351350  947325 cri.go:89] found id: ""
	I1213 10:39:54.351364  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.351372  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:54.351377  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:54.351439  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:54.376698  947325 cri.go:89] found id: ""
	I1213 10:39:54.376712  947325 logs.go:282] 0 containers: []
	W1213 10:39:54.376720  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:54.376729  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:54.376740  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:54.408737  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:54.408753  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:54.475785  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:54.475805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:54.498578  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:54.498595  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:54.571508  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:54.562841   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.563554   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565208   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565888   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.567536   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:54.562841   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.563554   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565208   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.565888   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:54.567536   16763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:54.571518  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:54.571529  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:39:57.141570  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:39:57.151660  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:39:57.151725  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:39:57.177208  947325 cri.go:89] found id: ""
	I1213 10:39:57.177222  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.177230  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:39:57.177235  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:39:57.177305  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:39:57.202689  947325 cri.go:89] found id: ""
	I1213 10:39:57.202703  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.202710  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:39:57.202715  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:39:57.202778  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:39:57.227567  947325 cri.go:89] found id: ""
	I1213 10:39:57.227581  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.227588  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:39:57.227593  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:39:57.227651  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:39:57.257034  947325 cri.go:89] found id: ""
	I1213 10:39:57.257048  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.257056  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:39:57.257061  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:39:57.257118  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:39:57.282238  947325 cri.go:89] found id: ""
	I1213 10:39:57.282251  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.282258  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:39:57.282263  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:39:57.282321  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:39:57.308327  947325 cri.go:89] found id: ""
	I1213 10:39:57.308341  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.308348  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:39:57.308353  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:39:57.308412  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:39:57.334174  947325 cri.go:89] found id: ""
	I1213 10:39:57.334188  947325 logs.go:282] 0 containers: []
	W1213 10:39:57.334196  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:39:57.334203  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:39:57.334214  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:39:57.365982  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:39:57.365997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:39:57.438986  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:39:57.439007  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:39:57.454096  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:39:57.454113  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:39:57.539317  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:39:57.526904   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.527801   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.529755   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.530529   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.532282   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:39:57.526904   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.527801   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.529755   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.530529   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:39:57.532282   16863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:39:57.539330  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:39:57.539341  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:00.111211  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:00.161991  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:00.162066  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:00.278257  947325 cri.go:89] found id: ""
	I1213 10:40:00.278273  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.278282  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:00.278288  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:00.278371  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:00.356421  947325 cri.go:89] found id: ""
	I1213 10:40:00.356441  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.356449  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:00.356459  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:00.356542  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:00.426855  947325 cri.go:89] found id: ""
	I1213 10:40:00.426872  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.426880  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:00.426887  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:00.426962  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:00.484844  947325 cri.go:89] found id: ""
	I1213 10:40:00.484860  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.484868  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:00.484874  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:00.484945  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:00.602425  947325 cri.go:89] found id: ""
	I1213 10:40:00.602444  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.602452  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:00.602465  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:00.602545  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:00.682272  947325 cri.go:89] found id: ""
	I1213 10:40:00.682288  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.682297  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:00.682303  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:00.682377  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:00.717455  947325 cri.go:89] found id: ""
	I1213 10:40:00.717470  947325 logs.go:282] 0 containers: []
	W1213 10:40:00.717478  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:00.717486  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:00.717498  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:00.751785  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:00.751805  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:00.823234  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:00.823256  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:00.840067  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:00.840092  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:00.911938  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:00.902907   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.903639   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905343   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905895   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.907562   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:00.902907   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.903639   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905343   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.905895   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:00.907562   16969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:00.911995  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:00.912005  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.480277  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:03.490777  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:03.490839  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:03.516535  947325 cri.go:89] found id: ""
	I1213 10:40:03.516549  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.516556  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:03.516561  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:03.516630  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:03.543061  947325 cri.go:89] found id: ""
	I1213 10:40:03.543075  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.543083  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:03.543088  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:03.543149  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:03.569136  947325 cri.go:89] found id: ""
	I1213 10:40:03.569150  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.569158  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:03.569163  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:03.569222  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:03.596417  947325 cri.go:89] found id: ""
	I1213 10:40:03.596431  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.596438  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:03.596443  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:03.596510  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:03.624475  947325 cri.go:89] found id: ""
	I1213 10:40:03.624489  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.624496  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:03.624501  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:03.624560  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:03.650480  947325 cri.go:89] found id: ""
	I1213 10:40:03.650495  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.650509  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:03.650515  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:03.650574  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:03.679244  947325 cri.go:89] found id: ""
	I1213 10:40:03.679258  947325 logs.go:282] 0 containers: []
	W1213 10:40:03.679265  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:03.679272  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:03.679283  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:03.752004  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:03.742428   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.743353   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.744776   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.745390   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.747857   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:03.742428   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.743353   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.744776   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.745390   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:03.747857   17052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:03.752014  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:03.752025  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:03.833866  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:03.833888  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:03.863364  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:03.863381  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:03.930202  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:03.930230  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.446850  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:06.456936  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:06.457005  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:06.481624  947325 cri.go:89] found id: ""
	I1213 10:40:06.481638  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.481645  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:06.481653  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:06.481709  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:06.510312  947325 cri.go:89] found id: ""
	I1213 10:40:06.510335  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.510342  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:06.510347  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:06.510408  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:06.541422  947325 cri.go:89] found id: ""
	I1213 10:40:06.541439  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.541446  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:06.541451  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:06.541511  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:06.567745  947325 cri.go:89] found id: ""
	I1213 10:40:06.567759  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.567766  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:06.567771  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:06.567827  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:06.593070  947325 cri.go:89] found id: ""
	I1213 10:40:06.593085  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.593092  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:06.593097  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:06.593159  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:06.620092  947325 cri.go:89] found id: ""
	I1213 10:40:06.620106  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.620114  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:06.620119  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:06.620180  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:06.646655  947325 cri.go:89] found id: ""
	I1213 10:40:06.646668  947325 logs.go:282] 0 containers: []
	W1213 10:40:06.646676  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:06.646684  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:06.646695  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:06.713111  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:06.713133  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:06.729687  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:06.729703  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:06.811226  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:06.802029   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.803349   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.804038   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.805655   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.806271   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:06.802029   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.803349   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.804038   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.805655   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:06.806271   17169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:06.811237  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:06.811252  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:06.879267  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:06.879290  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.408425  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:09.418903  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:40:09.418973  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:40:09.445864  947325 cri.go:89] found id: ""
	I1213 10:40:09.445878  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.445886  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:40:09.445891  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:40:09.445953  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:40:09.477028  947325 cri.go:89] found id: ""
	I1213 10:40:09.477042  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.477049  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:40:09.477054  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:40:09.477114  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:40:09.503739  947325 cri.go:89] found id: ""
	I1213 10:40:09.503754  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.503761  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:40:09.503766  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:40:09.503830  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:40:09.530433  947325 cri.go:89] found id: ""
	I1213 10:40:09.530449  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.530458  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:40:09.530463  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:40:09.530527  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:40:09.557391  947325 cri.go:89] found id: ""
	I1213 10:40:09.557406  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.557413  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:40:09.557424  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:40:09.557488  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:40:09.583991  947325 cri.go:89] found id: ""
	I1213 10:40:09.584006  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.584014  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:40:09.584020  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:40:09.584084  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:40:09.610671  947325 cri.go:89] found id: ""
	I1213 10:40:09.610685  947325 logs.go:282] 0 containers: []
	W1213 10:40:09.610692  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:40:09.610701  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:40:09.610712  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:40:09.626022  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:40:09.626039  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:40:09.693054  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:40:09.684419   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.685112   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.686796   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.687319   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.689067   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:40:09.684419   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.685112   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.686796   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.687319   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:40:09.689067   17265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:40:09.693064  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:40:09.693077  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:40:09.767666  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:40:09.767694  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:40:09.799935  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:40:09.799953  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:40:12.366822  947325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:40:12.377676  947325 kubeadm.go:602] duration metric: took 4m2.920144703s to restartPrimaryControlPlane
	W1213 10:40:12.377740  947325 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:40:12.377825  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:40:12.791103  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:40:12.803671  947325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:40:12.811334  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:40:12.811389  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:40:12.818912  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:40:12.818922  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:40:12.818976  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:40:12.826986  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:40:12.827043  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:40:12.834424  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:40:12.842053  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:40:12.842110  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:40:12.849745  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.857650  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:40:12.857707  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:40:12.865223  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:40:12.873255  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:40:12.873315  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:40:12.881016  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:40:12.922045  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:40:12.922134  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:40:13.007876  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:40:13.007942  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:40:13.007977  947325 kubeadm.go:319] OS: Linux
	I1213 10:40:13.008021  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:40:13.008068  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:40:13.008115  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:40:13.008162  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:40:13.008210  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:40:13.008257  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:40:13.008305  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:40:13.008352  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:40:13.008397  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:40:13.081346  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:40:13.081472  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:40:13.081605  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:40:13.089963  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:40:13.093587  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:40:13.093699  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:40:13.093775  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:40:13.093883  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:40:13.093964  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:40:13.094047  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:40:13.094113  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:40:13.094188  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:40:13.094255  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:40:13.094334  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:40:13.094412  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:40:13.094451  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:40:13.094511  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:40:13.317953  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:40:13.628016  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:40:13.956341  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:40:14.391056  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:40:14.663244  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:40:14.663900  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:40:14.666642  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:40:14.670022  947325 out.go:252]   - Booting up control plane ...
	I1213 10:40:14.670125  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:40:14.670202  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:40:14.670267  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:40:14.685196  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:40:14.685574  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:40:14.692785  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:40:14.693070  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:40:14.693112  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:40:14.837275  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:40:14.837410  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:44:14.836045  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00023703s
	I1213 10:44:14.836071  947325 kubeadm.go:319] 
	I1213 10:44:14.836328  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:44:14.836386  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:44:14.836565  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:44:14.836573  947325 kubeadm.go:319] 
	I1213 10:44:14.836751  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:44:14.837048  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:44:14.837101  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:44:14.837105  947325 kubeadm.go:319] 
	I1213 10:44:14.841975  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:44:14.842445  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:44:14.842565  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:44:14.842818  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:44:14.842823  947325 kubeadm.go:319] 
	I1213 10:44:14.842900  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:44:14.842999  947325 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00023703s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:44:14.843084  947325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:44:15.255135  947325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:44:15.268065  947325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:44:15.268119  947325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:44:15.276039  947325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:44:15.276049  947325 kubeadm.go:158] found existing configuration files:
	
	I1213 10:44:15.276099  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:44:15.283960  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:44:15.284017  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:44:15.291479  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:44:15.299068  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:44:15.299125  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:44:15.306780  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.314429  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:44:15.314486  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:44:15.321813  947325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:44:15.329258  947325 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:44:15.329313  947325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:44:15.337109  947325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:44:15.375292  947325 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:44:15.375341  947325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:44:15.450506  947325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:44:15.450577  947325 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:44:15.450617  947325 kubeadm.go:319] OS: Linux
	I1213 10:44:15.450661  947325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:44:15.450708  947325 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:44:15.450754  947325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:44:15.450800  947325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:44:15.450849  947325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:44:15.450900  947325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:44:15.450944  947325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:44:15.450990  947325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:44:15.451035  947325 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:44:15.530795  947325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:44:15.530912  947325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:44:15.531008  947325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:44:15.540322  947325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:44:15.543642  947325 out.go:252]   - Generating certificates and keys ...
	I1213 10:44:15.543721  947325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:44:15.543784  947325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:44:15.543859  947325 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:44:15.543918  947325 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:44:15.543987  947325 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:44:15.544039  947325 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:44:15.544101  947325 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:44:15.544161  947325 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:44:15.544244  947325 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:44:15.544319  947325 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:44:15.544391  947325 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:44:15.544447  947325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:44:15.880761  947325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:44:16.054505  947325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:44:16.157902  947325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:44:16.328847  947325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:44:16.490203  947325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:44:16.491055  947325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:44:16.493708  947325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:44:16.496861  947325 out.go:252]   - Booting up control plane ...
	I1213 10:44:16.496957  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:44:16.497033  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:44:16.497100  947325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:44:16.511097  947325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:44:16.511202  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:44:16.518811  947325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:44:16.519350  947325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:44:16.519584  947325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:44:16.652368  947325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:44:16.652480  947325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:48:16.653403  947325 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001096364s
	I1213 10:48:16.653421  947325 kubeadm.go:319] 
	I1213 10:48:16.653477  947325 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:48:16.653510  947325 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:48:16.653633  947325 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:48:16.653637  947325 kubeadm.go:319] 
	I1213 10:48:16.653740  947325 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:48:16.653771  947325 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:48:16.653801  947325 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:48:16.653804  947325 kubeadm.go:319] 
	I1213 10:48:16.659039  947325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:48:16.659521  947325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:48:16.659636  947325 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:48:16.659899  947325 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:48:16.659915  947325 kubeadm.go:319] 
	I1213 10:48:16.659983  947325 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:48:16.660039  947325 kubeadm.go:403] duration metric: took 12m7.242563635s to StartCluster
	I1213 10:48:16.660068  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:48:16.660127  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:48:16.684783  947325 cri.go:89] found id: ""
	I1213 10:48:16.684798  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.684805  947325 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:48:16.684810  947325 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:48:16.684871  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:48:16.709976  947325 cri.go:89] found id: ""
	I1213 10:48:16.709990  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.709997  947325 logs.go:284] No container was found matching "etcd"
	I1213 10:48:16.710001  947325 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:48:16.710060  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:48:16.735338  947325 cri.go:89] found id: ""
	I1213 10:48:16.735351  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.735358  947325 logs.go:284] No container was found matching "coredns"
	I1213 10:48:16.735363  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:48:16.735422  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:48:16.760771  947325 cri.go:89] found id: ""
	I1213 10:48:16.760784  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.760791  947325 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:48:16.760797  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:48:16.760851  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:48:16.785193  947325 cri.go:89] found id: ""
	I1213 10:48:16.785207  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.785215  947325 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:48:16.785220  947325 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:48:16.785280  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:48:16.811008  947325 cri.go:89] found id: ""
	I1213 10:48:16.811022  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.811029  947325 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:48:16.811034  947325 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:48:16.811093  947325 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:48:16.840077  947325 cri.go:89] found id: ""
	I1213 10:48:16.840092  947325 logs.go:282] 0 containers: []
	W1213 10:48:16.840099  947325 logs.go:284] No container was found matching "kindnet"
	I1213 10:48:16.840119  947325 logs.go:123] Gathering logs for kubelet ...
	I1213 10:48:16.840130  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:48:16.909363  947325 logs.go:123] Gathering logs for dmesg ...
	I1213 10:48:16.909386  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:48:16.924416  947325 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:48:16.924438  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:48:17.001976  947325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:48:16.991502   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.992681   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.993581   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995339   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:48:16.995963   21073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:48:17.001987  947325 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:48:17.001997  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:48:17.083059  947325 logs.go:123] Gathering logs for container status ...
	I1213 10:48:17.083078  947325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:48:17.113855  947325 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:48:17.113886  947325 out.go:285] * 
	W1213 10:48:17.113944  947325 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.113961  947325 out.go:285] * 
	W1213 10:48:17.116079  947325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:48:17.121140  947325 out.go:203] 
	W1213 10:48:17.123914  947325 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096364s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:48:17.123972  947325 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:48:17.123993  947325 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:48:17.128861  947325 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290540792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290575401Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290616288Z" level=info msg="Create NRI interface"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291085281Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291114401Z" level=info msg="runtime interface created"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291129622Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291142299Z" level=info msg="runtime interface starting up..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291148937Z" level=info msg="starting plugins..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291165938Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291236782Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:36:08 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.084834397Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b5efff79-46eb-41f2-bde4-db3ba9dab38c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08566844Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5615dd29-1801-45cf-b9ec-bc2670925ce8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086277701Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=27142b95-3cc3-4adb-a2df-9868044a9998 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086727642Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=22595c5f-3db5-4062-8e04-cb17f6bc794b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.087217057Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4772ea3e-d27c-4029-bb8e-c23e148a40e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08768738Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9219a2d6-ec51-448e-87c0-444e5d98b53a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.088157391Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a67c2fca-67f5-45c5-89da-71309b05610c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.534115746Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3049b4b3-14f8-431e-ab4d-c6efa4a37dac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.535316398Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=aca0cbac-b5e4-4959-a768-b532f9c78063 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.53607634Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f458fd4-6b17-4cc2-8b0b-32f7a700d5d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.536719579Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=93897364-2c92-4299-ac1e-dfb20638840a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538084483Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=80c3abea-faad-48a9-8be1-ff63680847aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538942002Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9f779e3b-3069-476b-9013-f486002774b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.539437793Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=de504892-ee6f-46b3-8ac6-2712427d6188 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:08.092802   22548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:08.093742   22548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:08.094393   22548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:08.095569   22548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:08.096000   22548 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:50:08 up  5:32,  0 user,  load average: 0.19, 0.24, 0.50
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:50:05 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:06 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1106.
	Dec 13 10:50:06 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:06 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:06 functional-200955 kubelet[22437]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:06 functional-200955 kubelet[22437]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:06 functional-200955 kubelet[22437]: E1213 10:50:06.531432   22437 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:06 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:06 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:07 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1107.
	Dec 13 10:50:07 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:07 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:07 functional-200955 kubelet[22456]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:07 functional-200955 kubelet[22456]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:07 functional-200955 kubelet[22456]: E1213 10:50:07.215859   22456 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:07 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:07 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:07 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1108.
	Dec 13 10:50:07 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:07 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:08 functional-200955 kubelet[22538]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:08 functional-200955 kubelet[22538]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:08 functional-200955 kubelet[22538]: E1213 10:50:08.037183   22538 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:08 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:08 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
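The kubelet crash loop above is the root cause of this block's failures: kubelet v1.35.0-beta.0 refuses to start on a host using cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), and this runner (Ubuntu 20.04, kernel 5.15) evidently still mounts the legacy hierarchy. A minimal Go sketch, Linux-only and purely illustrative (not part of the test suite), of checking which hierarchy a host mounts:

// Check whether /sys/fs/cgroup is a cgroup v2 (unified) mount, which is
// the host condition the kubelet validation error above requires.
package main

import (
	"fmt"
	"syscall"
)

// CGROUP2_SUPER_MAGIC from <linux/magic.h>: a unified cgroup v2 hierarchy
// mounted at /sys/fs/cgroup reports this filesystem type.
const cgroup2SuperMagic = 0x63677270

func main() {
	var fs syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &fs); err != nil {
		fmt.Println("statfs /sys/fs/cgroup failed:", err)
		return
	}
	if fs.Type == cgroup2SuperMagic {
		fmt.Println("cgroup v2 (unified): kubelet's validation should pass")
	} else {
		fmt.Println("cgroup v1: kubelet fails validation, as in the log above")
	}
}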
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (360.600574ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.34s)
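The "status error: exit status 2 (may be ok)" line above is why the harness degrades gracefully here: `minikube status` exits non-zero when a component is stopped, so the helpers read the printed state rather than the exit code before deciding whether to issue kubectl commands. A hedged sketch of that gating, with the binary path, flags, and profile name taken from this log (the helper itself is hypothetical, not the actual helpers_test.go code):

// Gate kubectl interaction on the apiserver state reported by
// `minikube status --format={{.APIServer}}`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func apiServerState(profile string) string {
	// `minikube status` exits non-zero for stopped components (the
	// "exit status 2 (may be ok)" above), so inspect stdout, not the error.
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	if state := apiServerState("functional-200955"); state != "Running" {
		fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q)\n", state)
		return
	}
	fmt.Println("apiserver running; safe to issue kubectl commands")
}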

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 10 times in a row]
I1213 10:48:35.371716  907484 retry.go:31] will retry after 3.520464617s: Temporary Error: Get "http://10.108.166.7": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 3 times in a row]
E1213 10:48:37.840650  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 11 times in a row]
I1213 10:48:48.893140  907484 retry.go:31] will retry after 6.193285284s: Temporary Error: Get "http://10.108.166.7": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 16 times in a row]
I1213 10:49:05.087259  907484 retry.go:31] will retry after 6.662316377s: Temporary Error: Get "http://10.108.166.7": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 17 times in a row]
I1213 10:49:21.750934  907484 retry.go:31] will retry after 9.239769641s: Temporary Error: Get "http://10.108.166.7": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 19 times in a row]
I1213 10:49:40.997128  907484 retry.go:31] will retry after 15.335163141s: Temporary Error: Get "http://10.108.166.7": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 45 times in a row]
E1213 10:50:25.726206  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[identical warning logged 31 times in a row]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1213 10:51:40.920229  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above repeated identically 44 times in total before the poll's context deadline was reached]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (328.800183ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
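The repeated warnings and the final "context deadline exceeded" are the signature of a label-selector poll loop running against an apiserver that never comes back. A minimal sketch of that pattern, assuming k8s.io/client-go; the function name and the 2s interval are illustrative, not minikube's actual helper code:

	// waitForPodsByLabel lists pods matching selector until at least one
	// exists or ctx expires. While the apiserver is unreachable, every List
	// call fails with "connect: connection refused" (one WARNING per
	// attempt); once the 4m0s deadline passes, ctx.Err() yields
	// "context deadline exceeded", matching the failure above.
	package sketch

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		tick := time.NewTicker(2 * time.Second)
		defer tick.Stop()
		for {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// This is where the "pod list ... returned" warnings originate.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
			} else if len(pods.Items) > 0 {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}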
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
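Two details in the inspect output above matter for the earlier refusals: the container is still Running, and 8441/tcp is published only on the host loopback (127.0.0.1:33526), while the test dials the container network address 192.168.49.2:8441 directly. The refusals therefore point at the apiserver process inside the container being down, not at a missing port mapping. The published mapping can be confirmed with docker (host port varies per run):

	$ docker port functional-200955 8441/tcp
	127.0.0.1:33526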
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (321.222422ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
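Taken together with the earlier {{.APIServer}} probe, the two status checks show the expected split: the node container is Running while the apiserver inside it is Stopped. Since --format accepts a Go text/template over the status struct, both fields can be read in one call (illustrative invocation):

	$ out/minikube-linux-arm64 status -p functional-200955 --format '{{.Host}}/{{.APIServer}}'
	Running/Stopped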
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-200955 image load --daemon kicbase/echo-server:functional-200955 --alsologtostderr                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls                                                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image save kicbase/echo-server:functional-200955 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image rm kicbase/echo-server:functional-200955 --alsologtostderr                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls                                                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls                                                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image save --daemon kicbase/echo-server:functional-200955 --alsologtostderr                                                             │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh sudo cat /etc/test/nested/copy/907484/hosts                                                                                         │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh sudo cat /etc/ssl/certs/907484.pem                                                                                                  │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh sudo cat /usr/share/ca-certificates/907484.pem                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh sudo cat /etc/ssl/certs/9074842.pem                                                                                                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh sudo cat /usr/share/ca-certificates/9074842.pem                                                                                     │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls --format short --alsologtostderr                                                                                               │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls --format json --alsologtostderr                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls --format table --alsologtostderr                                                                                               │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls --format yaml --alsologtostderr                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh            │ functional-200955 ssh pgrep buildkitd                                                                                                                     │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ image          │ functional-200955 image build -t localhost/my-image:functional-200955 testdata/build --alsologtostderr                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ image          │ functional-200955 image ls                                                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ update-context │ functional-200955 update-context --alsologtostderr -v=2                                                                                                   │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ update-context │ functional-200955 update-context --alsologtostderr -v=2                                                                                                   │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ update-context │ functional-200955 update-context --alsologtostderr -v=2                                                                                                   │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:50:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:50:23.051722  964543 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:50:23.051865  964543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:23.051877  964543 out.go:374] Setting ErrFile to fd 2...
	I1213 10:50:23.051883  964543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:23.052137  964543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:50:23.054351  964543 out.go:368] Setting JSON to false
	I1213 10:50:23.055400  964543 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19972,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:50:23.055509  964543 start.go:143] virtualization:  
	I1213 10:50:23.058700  964543 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:50:23.060861  964543 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:50:23.061000  964543 notify.go:221] Checking for updates...
	I1213 10:50:23.066718  964543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:50:23.069526  964543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:50:23.072436  964543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:50:23.075311  964543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:50:23.078155  964543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:50:23.081486  964543 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:50:23.082214  964543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:50:23.107165  964543 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:50:23.107352  964543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:23.167099  964543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:23.156424817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:23.167226  964543 docker.go:319] overlay module found
	I1213 10:50:23.170224  964543 out.go:179] * Using the docker driver based on existing profile
	I1213 10:50:23.173134  964543 start.go:309] selected driver: docker
	I1213 10:50:23.173150  964543 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:23.173256  964543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:50:23.173374  964543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:23.229761  964543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:23.219915868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:23.230225  964543 cni.go:84] Creating CNI manager for ""
	I1213 10:50:23.230291  964543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:50:23.230341  964543 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:23.233884  964543 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.534115746Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3049b4b3-14f8-431e-ab4d-c6efa4a37dac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.535316398Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=aca0cbac-b5e4-4959-a768-b532f9c78063 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.53607634Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f458fd4-6b17-4cc2-8b0b-32f7a700d5d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.536719579Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=93897364-2c92-4299-ac1e-dfb20638840a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538084483Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=80c3abea-faad-48a9-8be1-ff63680847aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538942002Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9f779e3b-3069-476b-9013-f486002774b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.539437793Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=de504892-ee6f-46b3-8ac6-2712427d6188 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.132806812Z" level=info msg="Checking image status: kicbase/echo-server:functional-200955" id=49e5f7ff-2567-4983-809e-74c5e6ef1e0d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.133021658Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.133104604Z" level=info msg="Image kicbase/echo-server:functional-200955 not found" id=49e5f7ff-2567-4983-809e-74c5e6ef1e0d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.13318618Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-200955 found" id=49e5f7ff-2567-4983-809e-74c5e6ef1e0d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.157336034Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-200955" id=c57728ae-b1de-4e96-850b-5e1b86c50dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.157727816Z" level=info msg="Image docker.io/kicbase/echo-server:functional-200955 not found" id=c57728ae-b1de-4e96-850b-5e1b86c50dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.157801434Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-200955 found" id=c57728ae-b1de-4e96-850b-5e1b86c50dc3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.182324821Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-200955" id=434597a6-fe38-4390-bbec-8dde59aa6e9d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.182508289Z" level=info msg="Image localhost/kicbase/echo-server:functional-200955 not found" id=434597a6-fe38-4390-bbec-8dde59aa6e9d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:28 functional-200955 crio[9915]: time="2025-12-13T10:50:28.182570657Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-200955 found" id=434597a6-fe38-4390-bbec-8dde59aa6e9d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.26648173Z" level=info msg="Checking image status: kicbase/echo-server:functional-200955" id=b0fbf4e6-66dc-4703-826a-fb275dadbe1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.266646597Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.266687713Z" level=info msg="Image kicbase/echo-server:functional-200955 not found" id=b0fbf4e6-66dc-4703-826a-fb275dadbe1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.266754077Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-200955 found" id=b0fbf4e6-66dc-4703-826a-fb275dadbe1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.302345255Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-200955" id=0d024b12-5184-48b4-8b9e-5db8d520fc40 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.302520173Z" level=info msg="Image docker.io/kicbase/echo-server:functional-200955 not found" id=0d024b12-5184-48b4-8b9e-5db8d520fc40 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.302570078Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-200955 found" id=0d024b12-5184-48b4-8b9e-5db8d520fc40 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:50:31 functional-200955 crio[9915]: time="2025-12-13T10:50:31.327797349Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-200955" id=f09fa566-b342-465f-9df7-c673a06959fb name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:26.952619   25331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:26.953167   25331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:26.954769   25331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:26.955146   25331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:26.956617   25331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:52:27 up  5:34,  0 user,  load average: 0.21, 0.33, 0.50
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:52:24 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:25 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1291.
	Dec 13 10:52:25 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:25 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:25 functional-200955 kubelet[25207]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:52:25 functional-200955 kubelet[25207]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:52:25 functional-200955 kubelet[25207]: E1213 10:52:25.270364   25207 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:25 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:25 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:25 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 13 10:52:25 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:25 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:25 functional-200955 kubelet[25225]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:52:25 functional-200955 kubelet[25225]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:52:25 functional-200955 kubelet[25225]: E1213 10:52:25.982315   25225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:25 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:25 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:52:26 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 13 10:52:26 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:26 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:52:26 functional-200955 kubelet[25288]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:52:26 functional-200955 kubelet[25288]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:52:26 functional-200955 kubelet[25288]: E1213 10:52:26.796304   25288 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:52:26 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:52:26 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
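The kubelet section above explains the whole failure chain: kubelet exits on startup because the node is still on cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so the apiserver on port 8441 never comes up and every kubectl call is refused. A quick way to confirm which cgroup hierarchy a host presents (a diagnostic sketch, not a command this test run executed; both are standard coreutils/Docker CLI):

	# "cgroup2fs" means the host is on cgroup v2; "tmpfs" means cgroup v1.
	stat -fc %T /sys/fs/cgroup/
	# Docker reports the same information from its own point of view.
	docker info --format '{{.CgroupVersion}}'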
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (318.729072ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.69s)
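The CRI-O section of this post-mortem also records how an unqualified image name is resolved: "kicbase/echo-server" is expanded via the unqualified-search registries list, then retried against docker.io and localhost. To inspect that search order on the node, the config file named in the log can be read directly (an illustrative sketch reusing the profile name from this report, not a command the test ran):

	# Print the unqualified-search registries list CRI-O consulted above.
	minikube -p functional-200955 ssh -- sudo cat /etc/containers/registries.conf.d/crio.conf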

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-200955 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-200955 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (68.596056ms)

-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-200955 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
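The repeated template failure above has two layers: the connection to 192.168.49.2:8441 is refused, so kubectl renders an empty List, and "index .items 0" then panics on the empty slice. A hardened variant of the same template (an illustrative sketch, not the assertion functional_test.go actually uses) degrades to empty output instead of a template error when no nodes come back:

	kubectl --context functional-200955 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'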
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-200955
helpers_test.go:244: (dbg) docker inspect functional-200955:

-- stdout --
	[
	    {
	        "Id": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	        "Created": "2025-12-13T10:21:24.063231347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 935996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:21:24.120776444Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2/8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2-json.log",
	        "Name": "/functional-200955",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-200955:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-200955",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d53cd00da87a9082532fc43ffe108cc46c100c4d52d598de1abc5c03d05f0a2",
	                "LowerDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88eeb10fcc1b499e31bc1f6077feaba1c1c5b1a510066e3c5f3c7fab1032a7a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-200955",
	                "Source": "/var/lib/docker/volumes/functional-200955/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-200955",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-200955",
	                "name.minikube.sigs.k8s.io": "functional-200955",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "766cddaf684c9eda3444b59c94594c94772112ec8d9beb3bf9ab0dee27a031f7",
	            "SandboxKey": "/var/run/docker/netns/766cddaf684c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-200955": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:41:8f:b5:13:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc1684d1fcbfd40cf35af7d1687322fe1e1f6c4d0d51bbc510daab317bab57d4",
	                    "EndpointID": "480d7cd674d03dbe8a8b029c866cc993844939c5b39aa63c9b0d9188a61c29a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-200955",
	                        "8d53cd00da87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
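The full inspect dump above can be narrowed to the fields the post-mortem actually checks by passing a format template; hyphenated map keys such as the network name need the index function (an illustrative sketch against the container from this report):

	# Container state and init PID.
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' functional-200955
	# Node IP on the profile's Docker network.
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-200955").IPAddress}}' functional-200955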
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-200955 -n functional-200955: exit status 2 (332.363281ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-200955 service hello-node --url --format={{.IP}}                                                                                         │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ service   │ functional-200955 service hello-node --url                                                                                                          │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001:/mount-9p --alsologtostderr -v=1              │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh -- ls -la /mount-9p                                                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh cat /mount-9p/test-1765623013843816415                                                                                        │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh sudo umount -f /mount-9p                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2010494100/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh -- ls -la /mount-9p                                                                                                           │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh sudo umount -f /mount-9p                                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount1 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount2 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ mount     │ -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount3 --alsologtostderr -v=1                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ ssh       │ functional-200955 ssh findmnt -T /mount1                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh findmnt -T /mount2                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ ssh       │ functional-200955 ssh findmnt -T /mount3                                                                                                            │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │ 13 Dec 25 10:50 UTC │
	│ mount     │ -p functional-200955 --kill=true                                                                                                                    │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ start     │ -p functional-200955 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-200955 --alsologtostderr -v=1                                                                                      │ functional-200955 │ jenkins │ v1.37.0 │ 13 Dec 25 10:50 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:50:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:50:23.051722  964543 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:50:23.051865  964543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:23.051877  964543 out.go:374] Setting ErrFile to fd 2...
	I1213 10:50:23.051883  964543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:23.052137  964543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:50:23.054351  964543 out.go:368] Setting JSON to false
	I1213 10:50:23.055400  964543 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19972,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:50:23.055509  964543 start.go:143] virtualization:  
	I1213 10:50:23.058700  964543 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:50:23.060861  964543 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:50:23.061000  964543 notify.go:221] Checking for updates...
	I1213 10:50:23.066718  964543 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:50:23.069526  964543 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:50:23.072436  964543 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:50:23.075311  964543 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:50:23.078155  964543 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:50:23.081486  964543 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:50:23.082214  964543 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:50:23.107165  964543 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:50:23.107352  964543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:23.167099  964543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:23.156424817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:23.167226  964543 docker.go:319] overlay module found
	I1213 10:50:23.170224  964543 out.go:179] * Using the docker driver based on existing profile
	I1213 10:50:23.173134  964543 start.go:309] selected driver: docker
	I1213 10:50:23.173150  964543 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:23.173256  964543 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:50:23.173374  964543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:23.229761  964543 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:23.219915868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:23.230225  964543 cni.go:84] Creating CNI manager for ""
	I1213 10:50:23.230291  964543 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:50:23.230341  964543 start.go:353] cluster config:
	{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:23.233884  964543 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290540792Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290575401Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.290616288Z" level=info msg="Create NRI interface"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291085281Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291114401Z" level=info msg="runtime interface created"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291129622Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291142299Z" level=info msg="runtime interface starting up..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291148937Z" level=info msg="starting plugins..."
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291165938Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:36:08 functional-200955 crio[9915]: time="2025-12-13T10:36:08.291236782Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:36:08 functional-200955 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.084834397Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b5efff79-46eb-41f2-bde4-db3ba9dab38c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08566844Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5615dd29-1801-45cf-b9ec-bc2670925ce8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086277701Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=27142b95-3cc3-4adb-a2df-9868044a9998 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.086727642Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=22595c5f-3db5-4062-8e04-cb17f6bc794b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.087217057Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4772ea3e-d27c-4029-bb8e-c23e148a40e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.08768738Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9219a2d6-ec51-448e-87c0-444e5d98b53a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:40:13 functional-200955 crio[9915]: time="2025-12-13T10:40:13.088157391Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a67c2fca-67f5-45c5-89da-71309b05610c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.534115746Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3049b4b3-14f8-431e-ab4d-c6efa4a37dac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.535316398Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=aca0cbac-b5e4-4959-a768-b532f9c78063 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.53607634Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4f458fd4-6b17-4cc2-8b0b-32f7a700d5d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.536719579Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=93897364-2c92-4299-ac1e-dfb20638840a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538084483Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=80c3abea-faad-48a9-8be1-ff63680847aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.538942002Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9f779e3b-3069-476b-9013-f486002774b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:44:15 functional-200955 crio[9915]: time="2025-12-13T10:44:15.539437793Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=de504892-ee6f-46b3-8ac6-2712427d6188 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:26.040780   23522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:26.041420   23522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:26.043092   23522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:26.043767   23522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:26.045366   23522 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:10] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:11] overlayfs: idmapped layers are currently not supported
	[  +0.076161] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:17] overlayfs: idmapped layers are currently not supported
	[Dec13 10:18] overlayfs: idmapped layers are currently not supported
	[Dec13 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:50:26 up  5:32,  0 user,  load average: 0.53, 0.31, 0.52
	Linux functional-200955 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:50:23 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:24 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 13 10:50:24 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:24 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:24 functional-200955 kubelet[23381]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:24 functional-200955 kubelet[23381]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:24 functional-200955 kubelet[23381]: E1213 10:50:24.539225   23381 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:24 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:24 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:25 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 13 10:50:25 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:25 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:25 functional-200955 kubelet[23426]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:25 functional-200955 kubelet[23426]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:25 functional-200955 kubelet[23426]: E1213 10:50:25.294195   23426 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:25 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:25 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:50:25 functional-200955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 13 10:50:25 functional-200955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:25 functional-200955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:50:26 functional-200955 kubelet[23523]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:26 functional-200955 kubelet[23523]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:50:26 functional-200955 kubelet[23523]: E1213 10:50:26.041258   23523 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:50:26 functional-200955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:50:26 functional-200955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-200955 -n functional-200955: exit status 2 (322.258545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-200955" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.45s)
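
The status helper agrees with the diagnosis that runs through this whole block: the apiserver is "Stopped" because the kubelet is crash-looping on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1", restart counter past 1100 in the log above), so every kubectl call is refused on port 8441. Below is a minimal sketch of roughly the host check behind that kubelet error, assuming Linux and the golang.org/x/sys/unix module; it is not minikube's or kubelet's actual code.

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// A cgroup v2 (unified) host mounts cgroup2fs at /sys/fs/cgroup, so
	// statfs(2) reports CGROUP2_SUPER_MAGIC as the filesystem type there.
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	if uint64(st.Type) == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 host - kubelet v1.35.0-beta.0 refuses to start")
	}
}

On this runner the sketch would take the cgroup v1 branch; moving the host (or the node image) to the unified hierarchy is what the kubelet error is asking for.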

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1213 10:48:24.766444  960371 out.go:360] Setting OutFile to fd 1 ...
I1213 10:48:24.766654  960371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:48:24.766682  960371 out.go:374] Setting ErrFile to fd 2...
I1213 10:48:24.766700  960371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:48:24.767008  960371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:48:24.767322  960371 mustload.go:66] Loading cluster: functional-200955
I1213 10:48:24.767906  960371 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:48:24.769204  960371 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:48:24.828173  960371 host.go:66] Checking if "functional-200955" exists ...
I1213 10:48:24.828593  960371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:48:24.919645  960371 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:48:24.905009373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:48:24.919779  960371 api_server.go:166] Checking apiserver status ...
I1213 10:48:24.919833  960371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 10:48:24.919891  960371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:48:24.972396  960371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
W1213 10:48:25.092748  960371 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 10:48:25.096251  960371 out.go:179] * The control-plane node functional-200955 apiserver is not running: (state=Stopped)
I1213 10:48:25.099306  960371 out.go:179]   To start a cluster, run: "minikube start -p functional-200955"

stdout: * The control-plane node functional-200955 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-200955"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 960370: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
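
Exit code 103 follows directly from the stderr trace: before creating a tunnel, minikube probes the apiserver by running pgrep inside the node over SSH, and pgrep exits 1 when no kube-apiserver process matches, which the tunnel reports as state Stopped. Below is a standalone sketch of that probe, meant to be run inside the node (the command pattern is copied from the log; this is not minikube's actual implementation).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same check the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits 1 when nothing matches - the failure mode above.
		fmt.Println("no apiserver process:", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}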

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-200955 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-200955 apply -f testdata/testsvc.yaml: exit status 1 (149.640308ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-200955 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.15s)
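
The apply never reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and that download is what gets connection-refused. A hedged standalone sketch of the same fetch follows (endpoint copied from the log; certificate verification is skipped purely for brevity).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// kubectl's validator fetches this schema before applying manifests.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8441/openapi/v2")
	if err != nil {
		fmt.Println("openapi fetch failed:", err) // connection refused, as above
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi served:", resp.Status)
}

Note that --validate=false, which the error message suggests, would only defer the failure to the POST itself while the apiserver is down.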

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (101.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.108.166.7": Temporary Error: Get "http://10.108.166.7": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-200955 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-200955 get svc nginx-svc: exit status 1 (58.682943ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-200955 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (101.02s)
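
The "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error is what a bounded HTTP client returns when nothing answers on the service's ClusterIP; that address is only reachable from the host while a tunnel process is alive, and the tunnels died in the earlier subtest. A rough standalone equivalent of the poll (ClusterIP copied from the log; not the test's actual code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // bounded, like the test
	deadline := time.Now().Add(100 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://10.108.166.7") // nginx-svc ClusterIP
		if err == nil {
			resp.Body.Close()
			fmt.Println("nginx answered:", resp.Status)
			return
		}
		time.Sleep(2 * time.Second) // err: timeout or refused, as in the log
	}
	fmt.Println("gave up: ClusterIP never became reachable")
}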

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-200955 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-200955 create deployment hello-node --image kicbase/echo-server: exit status 1 (58.274406ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-200955 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 service list: exit status 103 (262.443394ms)

-- stdout --
	* The control-plane node functional-200955 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-200955"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-200955 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-200955 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-200955\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 service list -o json: exit status 103 (263.720684ms)

-- stdout --
	* The control-plane node functional-200955 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-200955"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-200955 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 service --namespace=default --https --url hello-node: exit status 103 (260.724771ms)

-- stdout --
	* The control-plane node functional-200955 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-200955"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-200955 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 service hello-node --url --format={{.IP}}: exit status 103 (268.21211ms)

-- stdout --
	* The control-plane node functional-200955 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-200955"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-200955 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-200955 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-200955\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 service hello-node --url: exit status 103 (271.428324ms)

-- stdout --
	* The control-plane node functional-200955 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-200955"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-200955 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-200955 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-200955"
functional_test.go:1579: failed to parse "* The control-plane node functional-200955 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-200955\"": parse "* The control-plane node functional-200955 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-200955\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)
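
The parse failure at the end is ordinary net/url behavior: the package rejects ASCII control characters, and what the test tried to parse as a URL is really minikube's two-line error message, newline included. It reproduces in a few lines (the string is abridged from the log):

package main

import (
	"fmt"
	"net/url"
)

func main() {
	notAURL := "* The control-plane node apiserver is not running\n  To start a cluster..."
	if _, err := url.Parse(notAURL); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
}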

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765623013843816415" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765623013843816415" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765623013843816415" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001/test-1765623013843816415
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.246516ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 10:50:14.237333  907484 retry.go:31] will retry after 546.802053ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 10:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 10:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 10:50 test-1765623013843816415
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh cat /mount-9p/test-1765623013843816415
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-200955 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-200955 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (56.836312ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-200955 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (269.598685ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=39825)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 13 10:50 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 13 10:50 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 13 10:50 test-1765623013843816415
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-200955 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:39825
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001:/mount-9p --alsologtostderr -v=1] stderr:
I1213 10:50:13.901660  962672 out.go:360] Setting OutFile to fd 1 ...
I1213 10:50:13.902233  962672 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:13.902242  962672 out.go:374] Setting ErrFile to fd 2...
I1213 10:50:13.902250  962672 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:13.902500  962672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:50:13.902767  962672 mustload.go:66] Loading cluster: functional-200955
I1213 10:50:13.903122  962672 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:13.903692  962672 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:50:13.922225  962672 host.go:66] Checking if "functional-200955" exists ...
I1213 10:50:13.922548  962672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:50:14.035927  962672 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:14.023952804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:50:14.036091  962672 cli_runner.go:164] Run: docker network inspect functional-200955 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 10:50:14.084619  962672 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001 into VM as /mount-9p ...
I1213 10:50:14.087721  962672 out.go:179]   - Mount type:   9p
I1213 10:50:14.090574  962672 out.go:179]   - User ID:      docker
I1213 10:50:14.093481  962672 out.go:179]   - Group ID:     docker
I1213 10:50:14.096292  962672 out.go:179]   - Version:      9p2000.L
I1213 10:50:14.099175  962672 out.go:179]   - Message Size: 262144
I1213 10:50:14.102086  962672 out.go:179]   - Options:      map[]
I1213 10:50:14.104903  962672 out.go:179]   - Bind Address: 192.168.49.1:39825
I1213 10:50:14.107813  962672 out.go:179] * Userspace file server: 
I1213 10:50:14.111213  962672 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 10:50:14.111311  962672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:50:14.147596  962672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
I1213 10:50:14.261580  962672 mount.go:180] unmount for /mount-9p ran successfully
I1213 10:50:14.261610  962672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1213 10:50:14.270240  962672 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39825,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1213 10:50:14.281317  962672 main.go:127] stdlog: ufs.go:141 connected
I1213 10:50:14.281487  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tversion tag 65535 msize 262144 version '9P2000.L'
I1213 10:50:14.281597  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rversion tag 65535 msize 262144 version '9P2000'
I1213 10:50:14.281845  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1213 10:50:14.281906  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rattach tag 0 aqid (15c3d17 175521d1 'd')
I1213 10:50:14.282194  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 0
I1213 10:50:14.282249  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d17 175521d1 'd') m d775 at 0 mt 1765623013 l 4096 t 0 d 0 ext )
I1213 10:50:14.285394  962672 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/.mount-process: {Name:mk27da51c6edc82d53b7ee437a59b64b9ebf0dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:50:14.285735  962672 mount.go:105] mount successful: ""
I1213 10:50:14.289188  962672 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo4252008234/001 to /mount-9p
I1213 10:50:14.291926  962672 out.go:203] 
I1213 10:50:14.294838  962672 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1213 10:50:15.341946  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 0
I1213 10:50:15.342029  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d17 175521d1 'd') m d775 at 0 mt 1765623013 l 4096 t 0 d 0 ext )
I1213 10:50:15.342372  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 1 
I1213 10:50:15.342410  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 
I1213 10:50:15.342526  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Topen tag 0 fid 1 mode 0
I1213 10:50:15.342576  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Ropen tag 0 qid (15c3d17 175521d1 'd') iounit 0
I1213 10:50:15.342724  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 0
I1213 10:50:15.342759  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d17 175521d1 'd') m d775 at 0 mt 1765623013 l 4096 t 0 d 0 ext )
I1213 10:50:15.342913  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 0 count 262120
I1213 10:50:15.343034  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 258
I1213 10:50:15.343183  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 258 count 261862
I1213 10:50:15.343207  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.343349  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:50:15.343372  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.343528  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 10:50:15.343566  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 (15c3d18 175521d1 '') 
I1213 10:50:15.343681  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.343714  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d18 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.343863  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.343900  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d18 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.344057  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 2
I1213 10:50:15.344079  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.344217  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 2 0:'test-1765623013843816415' 
I1213 10:50:15.344247  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 (15c3d1a 175521d1 '') 
I1213 10:50:15.344411  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.344447  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('test-1765623013843816415' 'jenkins' 'jenkins' '' q (15c3d1a 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.344592  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.344622  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('test-1765623013843816415' 'jenkins' 'jenkins' '' q (15c3d1a 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.344731  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 2
I1213 10:50:15.344750  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.344885  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 10:50:15.344914  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 (15c3d19 175521d1 '') 
I1213 10:50:15.345020  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.345051  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d19 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.345178  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.345207  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d19 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.345329  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 2
I1213 10:50:15.345353  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.345465  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:50:15.345495  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.345660  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 1
I1213 10:50:15.345689  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.633922  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 1 0:'test-1765623013843816415' 
I1213 10:50:15.633996  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 (15c3d1a 175521d1 '') 
I1213 10:50:15.634182  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 1
I1213 10:50:15.634242  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('test-1765623013843816415' 'jenkins' 'jenkins' '' q (15c3d1a 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.634386  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 1 newfid 2 
I1213 10:50:15.634415  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 
I1213 10:50:15.634545  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Topen tag 0 fid 2 mode 0
I1213 10:50:15.634598  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Ropen tag 0 qid (15c3d1a 175521d1 '') iounit 0
I1213 10:50:15.634764  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 1
I1213 10:50:15.634800  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('test-1765623013843816415' 'jenkins' 'jenkins' '' q (15c3d1a 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.634956  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 2 offset 0 count 262120
I1213 10:50:15.634994  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 24
I1213 10:50:15.635116  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 2 offset 24 count 262120
I1213 10:50:15.635146  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.635302  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 2 offset 24 count 262120
I1213 10:50:15.635353  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.635585  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 2
I1213 10:50:15.635625  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.635804  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 1
I1213 10:50:15.635830  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.964489  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 0
I1213 10:50:15.964586  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d17 175521d1 'd') m d775 at 0 mt 1765623013 l 4096 t 0 d 0 ext )
I1213 10:50:15.964957  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 1 
I1213 10:50:15.964995  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 
I1213 10:50:15.965114  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Topen tag 0 fid 1 mode 0
I1213 10:50:15.965163  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Ropen tag 0 qid (15c3d17 175521d1 'd') iounit 0
I1213 10:50:15.965294  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 0
I1213 10:50:15.965332  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (15c3d17 175521d1 'd') m d775 at 0 mt 1765623013 l 4096 t 0 d 0 ext )
I1213 10:50:15.965481  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 0 count 262120
I1213 10:50:15.965603  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 258
I1213 10:50:15.965753  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 258 count 261862
I1213 10:50:15.965791  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.965915  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:50:15.965942  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.966086  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 10:50:15.966127  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 (15c3d18 175521d1 '') 
I1213 10:50:15.966284  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.966319  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d18 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.966455  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.966491  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (15c3d18 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.966615  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 2
I1213 10:50:15.966638  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.966775  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 2 0:'test-1765623013843816415' 
I1213 10:50:15.966806  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 (15c3d1a 175521d1 '') 
I1213 10:50:15.966921  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.966982  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('test-1765623013843816415' 'jenkins' 'jenkins' '' q (15c3d1a 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.967119  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.967154  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('test-1765623013843816415' 'jenkins' 'jenkins' '' q (15c3d1a 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.967282  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 2
I1213 10:50:15.967305  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.967445  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 10:50:15.967481  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rwalk tag 0 (15c3d19 175521d1 '') 
I1213 10:50:15.967600  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.967631  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d19 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.967775  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tstat tag 0 fid 2
I1213 10:50:15.967807  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (15c3d19 175521d1 '') m 644 at 0 mt 1765623013 l 24 t 0 d 0 ext )
I1213 10:50:15.967957  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 2
I1213 10:50:15.967991  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.968143  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tread tag 0 fid 1 offset 258 count 262120
I1213 10:50:15.968171  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rread tag 0 count 0
I1213 10:50:15.968316  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 1
I1213 10:50:15.968353  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:15.969585  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1213 10:50:15.969648  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rerror tag 0 ename 'file not found' ecode 0
I1213 10:50:16.250702  962672 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:38314 Tclunk tag 0 fid 0
I1213 10:50:16.250757  962672 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:38314 Rclunk tag 0
I1213 10:50:16.251967  962672 main.go:127] stdlog: ufs.go:147 disconnected
I1213 10:50:16.274412  962672 out.go:179] * Unmounting /mount-9p ...
I1213 10:50:16.277418  962672 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 10:50:16.289460  962672 mount.go:180] unmount for /mount-9p ran successfully
I1213 10:50:16.289647  962672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/.mount-process: {Name:mk27da51c6edc82d53b7ee437a59b64b9ebf0dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:50:16.292902  962672 out.go:203] 
W1213 10:50:16.295902  962672 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1213 10:50:16.298779  962672 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.53s)
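For orientation in the transcript above: each ">>> T..." / "<<< R..." pair is a 9P request and its reply between the guest client (192.168.49.2) and minikube's userspace file server on the host; the session negotiates 9P2000, attaches, walks and reads the exported directory, and disconnects before the mount process is torn down with MK_INTERRUPTED. The guest-side mount that starts the session is the one-liner logged at 10:50:14.270240, reproduced here as a readable sketch (IP, port, and msize are the per-run values from this log; the port is picked dynamically, so 39825 applies only to this invocation):

    # Guest-side 9p mount as issued by minikube in this run:
    sudo mount -t 9p \
      -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39825,trans=tcp,version=9p2000.L \
      192.168.49.1 /mount-9p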

TestJSONOutput/pause/Command (1.88s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-608785 --output=json --user=testUser
E1213 11:05:25.726354  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-608785 --output=json --user=testUser: exit status 80 (1.878780546s)

-- stdout --
	{"specversion":"1.0","id":"f9341a14-1e3c-4ad9-9690-8bef84a7322d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-608785 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"10afdab1-3f94-4dde-9576-ccfba0a1a59a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T11:05:26Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"677c5fa3-3592-490f-ad30-29a09529aad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-608785 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.88s)
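The GUEST_PAUSE error above is minikube shelling out to the OCI runtime inside the guest: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", i.e. runc's default state root for root (/run/runc) is absent in this CRI-O guest, so the container listing the pause path depends on cannot run; whether CRI-O keeps its runtime state under a different root is not visible from this log. A hedged way to reproduce the check by hand (profile name taken from this run):

    # Re-run the exact command the pause path uses, inside the guest:
    minikube ssh -p json-output-608785 -- sudo runc list -f json
    # In this failure mode it prints:
    #   time="..." level=error msg="open /run/runc: no such file or directory"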

TestJSONOutput/unpause/Command (1.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-608785 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-608785 --output=json --user=testUser: exit status 80 (1.461675636s)

-- stdout --
	{"specversion":"1.0","id":"3c8e425a-9ff8-403d-bf42-ef3290965358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-608785 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1c3c11fe-6ffc-4463-b21f-6346e4528fb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T11:05:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"fff693a0-f96f-42e5-b6a1-4d074700f526","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-608785 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.46s)
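The unpause failure is the same root cause, surfaced as GUEST_UNPAUSE. Because both commands emit one CloudEvents-style JSON object per line, the error payloads can be filtered straight out of the stream; a small sketch, assuming jq is available on the test host:

    # Pull just the error messages out of minikube's JSON event stream:
    out/minikube-linux-arm64 unpause -p json-output-608785 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'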

TestKubernetesUpgrade (788.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 11:23:25.221192  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.946251464s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-060355
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-060355: (1.580169897s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-060355 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-060355 status --format={{.Host}}: exit status 7 (135.927989ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
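Condensed, the sequence the test has driven so far (commands verbatim from the Run lines above); the second start below, which upgrades the profile from v1.28.0 to v1.35.0-beta.0, is the step that fails with exit status 109 after 12m22s:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-060355
    out/minikube-linux-arm64 -p kubernetes-upgrade-060355 status --format={{.Host}}   # exit 7: Stopped
    out/minikube-linux-arm64 start -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker --container-runtime=crio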
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 11:23:37.841736  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m22.565647798s)

-- stdout --
	* [kubernetes-upgrade-060355] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-060355" primary control-plane node in "kubernetes-upgrade-060355" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1213 11:23:37.641009 1084269 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:23:37.641121 1084269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:23:37.641140 1084269 out.go:374] Setting ErrFile to fd 2...
	I1213 11:23:37.641155 1084269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:23:37.641413 1084269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:23:37.641784 1084269 out.go:368] Setting JSON to false
	I1213 11:23:37.642768 1084269 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21967,"bootTime":1765603051,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 11:23:37.642863 1084269 start.go:143] virtualization:  
	I1213 11:23:37.646793 1084269 out.go:179] * [kubernetes-upgrade-060355] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:23:37.649751 1084269 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 11:23:37.649891 1084269 notify.go:221] Checking for updates...
	I1213 11:23:37.655355 1084269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:23:37.658245 1084269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 11:23:37.661088 1084269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 11:23:37.663979 1084269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:23:37.666772 1084269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:23:37.670018 1084269 config.go:182] Loaded profile config "kubernetes-upgrade-060355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:23:37.670570 1084269 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:23:37.749061 1084269 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:23:37.749246 1084269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:23:37.862335 1084269 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:23:37.850013451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:23:37.862450 1084269 docker.go:319] overlay module found
	I1213 11:23:37.865757 1084269 out.go:179] * Using the docker driver based on existing profile
	I1213 11:23:37.868601 1084269 start.go:309] selected driver: docker
	I1213 11:23:37.868632 1084269 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-060355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-060355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:23:37.868737 1084269 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:23:37.869440 1084269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:23:37.986387 1084269 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:23:37.96850124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:23:37.986723 1084269 cni.go:84] Creating CNI manager for ""
	I1213 11:23:37.986777 1084269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:23:37.986811 1084269 start.go:353] cluster config:
	{Name:kubernetes-upgrade-060355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-060355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:23:37.989998 1084269 out.go:179] * Starting "kubernetes-upgrade-060355" primary control-plane node in "kubernetes-upgrade-060355" cluster
	I1213 11:23:37.992873 1084269 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:23:37.995858 1084269 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:23:37.998791 1084269 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:23:37.998854 1084269 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:23:37.998865 1084269 cache.go:65] Caching tarball of preloaded images
	I1213 11:23:37.998958 1084269 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:23:37.998969 1084269 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 11:23:37.999096 1084269 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/config.json ...
	I1213 11:23:37.999371 1084269 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:23:38.034797 1084269 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:23:38.034818 1084269 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:23:38.034834 1084269 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:23:38.034865 1084269 start.go:360] acquireMachinesLock for kubernetes-upgrade-060355: {Name:mkcf2d448ecbf0d80045f3a41387445cbd840696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:23:38.034928 1084269 start.go:364] duration metric: took 42.462µs to acquireMachinesLock for "kubernetes-upgrade-060355"
	I1213 11:23:38.034957 1084269 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:23:38.034963 1084269 fix.go:54] fixHost starting: 
	I1213 11:23:38.035271 1084269 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-060355 --format={{.State.Status}}
	I1213 11:23:38.068090 1084269 fix.go:112] recreateIfNeeded on kubernetes-upgrade-060355: state=Stopped err=<nil>
	W1213 11:23:38.068129 1084269 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:23:38.071935 1084269 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-060355" ...
	I1213 11:23:38.072072 1084269 cli_runner.go:164] Run: docker start kubernetes-upgrade-060355
	I1213 11:23:38.484019 1084269 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-060355 --format={{.State.Status}}
	I1213 11:23:38.515489 1084269 kic.go:430] container "kubernetes-upgrade-060355" state is running.
	I1213 11:23:38.515885 1084269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-060355
	I1213 11:23:38.551122 1084269 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/config.json ...
	I1213 11:23:38.551353 1084269 machine.go:94] provisionDockerMachine start ...
	I1213 11:23:38.551414 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:38.573377 1084269 main.go:143] libmachine: Using SSH client type: native
	I1213 11:23:38.573781 1084269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33748 <nil> <nil>}
	I1213 11:23:38.573799 1084269 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:23:38.574361 1084269 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52814->127.0.0.1:33748: read: connection reset by peer
	I1213 11:23:41.749768 1084269 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-060355
	
	I1213 11:23:41.749837 1084269 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-060355"
	I1213 11:23:41.749933 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:41.772904 1084269 main.go:143] libmachine: Using SSH client type: native
	I1213 11:23:41.773231 1084269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33748 <nil> <nil>}
	I1213 11:23:41.773243 1084269 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-060355 && echo "kubernetes-upgrade-060355" | sudo tee /etc/hostname
	I1213 11:23:41.956543 1084269 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-060355
	
	I1213 11:23:41.956623 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:41.985182 1084269 main.go:143] libmachine: Using SSH client type: native
	I1213 11:23:41.985545 1084269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33748 <nil> <nil>}
	I1213 11:23:41.985580 1084269 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-060355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-060355/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-060355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:23:42.150583 1084269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:23:42.150727 1084269 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 11:23:42.150776 1084269 ubuntu.go:190] setting up certificates
	I1213 11:23:42.150816 1084269 provision.go:84] configureAuth start
	I1213 11:23:42.150931 1084269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-060355
	I1213 11:23:42.172362 1084269 provision.go:143] copyHostCerts
	I1213 11:23:42.172448 1084269 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 11:23:42.172463 1084269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 11:23:42.172567 1084269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 11:23:42.172675 1084269 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 11:23:42.172680 1084269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 11:23:42.172707 1084269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 11:23:42.172764 1084269 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 11:23:42.172769 1084269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 11:23:42.172793 1084269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 11:23:42.172844 1084269 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-060355 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-060355 localhost minikube]
	I1213 11:23:42.280591 1084269 provision.go:177] copyRemoteCerts
	I1213 11:23:42.280756 1084269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:23:42.280829 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:42.302053 1084269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33748 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/kubernetes-upgrade-060355/id_rsa Username:docker}
	I1213 11:23:42.437482 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:23:42.473284 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 11:23:42.503686 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:23:42.533334 1084269 provision.go:87] duration metric: took 382.489647ms to configureAuth
	I1213 11:23:42.533367 1084269 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:23:42.533592 1084269 config.go:182] Loaded profile config "kubernetes-upgrade-060355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:23:42.533707 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:42.568556 1084269 main.go:143] libmachine: Using SSH client type: native
	I1213 11:23:42.568869 1084269 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33748 <nil> <nil>}
	I1213 11:23:42.568895 1084269 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:23:43.038658 1084269 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:23:43.038742 1084269 machine.go:97] duration metric: took 4.487378827s to provisionDockerMachine
	I1213 11:23:43.038768 1084269 start.go:293] postStartSetup for "kubernetes-upgrade-060355" (driver="docker")
	I1213 11:23:43.038805 1084269 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:23:43.038911 1084269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:23:43.038992 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:43.062422 1084269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33748 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/kubernetes-upgrade-060355/id_rsa Username:docker}
	I1213 11:23:43.178298 1084269 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:23:43.185897 1084269 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:23:43.185928 1084269 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:23:43.185940 1084269 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 11:23:43.185998 1084269 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 11:23:43.186074 1084269 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 11:23:43.186173 1084269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:23:43.199529 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 11:23:43.238952 1084269 start.go:296] duration metric: took 200.145142ms for postStartSetup
	I1213 11:23:43.239078 1084269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:23:43.239147 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:43.271141 1084269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33748 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/kubernetes-upgrade-060355/id_rsa Username:docker}
	I1213 11:23:43.395556 1084269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:23:43.403080 1084269 fix.go:56] duration metric: took 5.368110175s for fixHost
	I1213 11:23:43.403105 1084269 start.go:83] releasing machines lock for "kubernetes-upgrade-060355", held for 5.368166938s
	I1213 11:23:43.403179 1084269 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-060355
	I1213 11:23:43.435211 1084269 ssh_runner.go:195] Run: cat /version.json
	I1213 11:23:43.435267 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:43.435512 1084269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:23:43.435562 1084269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-060355
	I1213 11:23:43.464967 1084269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33748 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/kubernetes-upgrade-060355/id_rsa Username:docker}
	I1213 11:23:43.486249 1084269 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33748 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/kubernetes-upgrade-060355/id_rsa Username:docker}
	I1213 11:23:43.590322 1084269 ssh_runner.go:195] Run: systemctl --version
	I1213 11:23:43.722615 1084269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:23:43.792810 1084269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:23:43.798173 1084269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:23:43.798267 1084269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:23:43.814879 1084269 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:23:43.814906 1084269 start.go:496] detecting cgroup driver to use...
	I1213 11:23:43.814953 1084269 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:23:43.815032 1084269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:23:43.838785 1084269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:23:43.859840 1084269 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:23:43.859917 1084269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:23:43.884837 1084269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:23:43.901285 1084269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:23:44.096071 1084269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:23:44.369349 1084269 docker.go:234] disabling docker service ...
	I1213 11:23:44.369417 1084269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:23:44.391530 1084269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:23:44.408038 1084269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:23:44.616006 1084269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:23:44.858306 1084269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:23:44.873441 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:23:44.903499 1084269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:23:44.903566 1084269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:23:44.923966 1084269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:23:44.924036 1084269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:23:44.934779 1084269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:23:44.950367 1084269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:23:44.961953 1084269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:23:44.972394 1084269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:23:44.982732 1084269 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:23:44.997254 1084269 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:23:45.017706 1084269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:23:45.032398 1084269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:23:45.046771 1084269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:23:45.282767 1084269 ssh_runner.go:195] Run: sudo systemctl restart crio
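(Annotation, not part of the captured log: the sed edits between 11:23:44.903 and 11:23:44.997 amount to the following state for the touched keys in /etc/crio/crio.conf.d/02-crio.conf; this is a reconstruction of only the lines the log modifies, not the full drop-in, and the restart above reloads CRI-O with these settings:

    # Keys as left by the sed commands above (reconstruction, not a capture):
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
)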
	I1213 11:23:45.489446 1084269 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:23:45.489531 1084269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:23:45.494304 1084269 start.go:564] Will wait 60s for crictl version
	I1213 11:23:45.494426 1084269 ssh_runner.go:195] Run: which crictl
	I1213 11:23:45.499822 1084269 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:23:45.558394 1084269 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:23:45.558556 1084269 ssh_runner.go:195] Run: crio --version
	I1213 11:23:45.608962 1084269 ssh_runner.go:195] Run: crio --version
	I1213 11:23:45.655625 1084269 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:23:45.658588 1084269 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-060355 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:23:45.681666 1084269 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:23:45.685624 1084269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:23:45.700065 1084269 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-060355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-060355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:23:45.700200 1084269 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:23:45.700268 1084269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:23:45.744121 1084269 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:23:45.744197 1084269 ssh_runner.go:195] Run: which lz4
	I1213 11:23:45.748401 1084269 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 11:23:45.752120 1084269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 11:23:45.752157 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1213 11:23:47.273642 1084269 crio.go:462] duration metric: took 1.525288213s to copy over tarball
	I1213 11:23:47.273716 1084269 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 11:23:49.647316 1084269 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.373570013s)
	I1213 11:23:49.647344 1084269 crio.go:469] duration metric: took 2.373675482s to extract the tarball
	I1213 11:23:49.647352 1084269 ssh_runner.go:146] rm: /preloaded.tar.lz4
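
The preload path above is a three-step pattern: probe for the tarball with stat, transfer it only on a miss, then unpack it into /var through lz4 and delete it. A rough local sketch of that flow; ensurePreload is a made-up name, and a plain cp stands in for minikube's scp over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload unpacks src into dstRoot unless marker already exists.
func ensurePreload(src, marker, dstRoot string) error {
	// Existence probe: the log's `stat -c "%s %y"` serves the same purpose.
	if _, err := os.Stat(marker); err == nil {
		return nil // already transferred
	}
	// minikube scps the tarball to the node; a local copy stands in here.
	if err := exec.Command("cp", src, marker).Run(); err != nil {
		return fmt.Errorf("copy preload: %w", err)
	}
	// Extract with xattrs preserved, decompressing through lz4.
	out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dstRoot, "-xf", marker).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %w: %s", err, out)
	}
	return os.Remove(marker) // mirror the rm of /preloaded.tar.lz4
}

func main() {
	fmt.Println(ensurePreload(
		"preloaded-images-k8s.tar.lz4", "/preloaded.tar.lz4", "/var"))
}
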
	I1213 11:23:49.716197 1084269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:23:49.748185 1084269 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:23:49.748212 1084269 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:23:49.748220 1084269 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:23:49.748342 1084269 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-060355 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-060355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
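
The kubelet unit above is a systemd drop-in: the empty ExecStart= line clears the packaged command before the versioned binary and node-specific flags are substituted in. A sketch of rendering it with text/template, with values taken from this run; the field names are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values from the run above; any other node would substitute its own.
	_ = t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.35.0-beta.0",
		"NodeName": "kubernetes-upgrade-060355",
		"NodeIP":   "192.168.76.2",
	})
}
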
	I1213 11:23:49.748444 1084269 ssh_runner.go:195] Run: crio config
	I1213 11:23:49.807073 1084269 cni.go:84] Creating CNI manager for ""
	I1213 11:23:49.807099 1084269 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:23:49.807115 1084269 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:23:49.807139 1084269 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-060355 NodeName:kubernetes-upgrade-060355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:23:49.807266 1084269 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-060355"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:23:49.807342 1084269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:23:49.815889 1084269 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:23:49.815962 1084269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:23:49.823934 1084269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 11:23:49.837444 1084269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:23:49.852373 1084269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1213 11:23:49.867016 1084269 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:23:49.871291 1084269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
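
The /etc/hosts edits above (here and earlier for host.minikube.internal) are deliberately idempotent: strip any previous line for the name, then append the fresh mapping, so repeated starts never accumulate duplicates. The same filter-then-append step in Go, assuming the tab-separated layout; upsertHost is a made-up helper:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing entry for name and appends ip<TAB>name.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
}

func main() {
	const before = "127.0.0.1\tlocalhost\n192.168.76.1\tcontrol-plane.minikube.internal"
	fmt.Print(upsertHost(before, "192.168.76.2", "control-plane.minikube.internal"))
}
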
	I1213 11:23:49.884031 1084269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:23:50.012415 1084269 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:23:50.030467 1084269 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355 for IP: 192.168.76.2
	I1213 11:23:50.030532 1084269 certs.go:195] generating shared ca certs ...
	I1213 11:23:50.030565 1084269 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:23:50.030754 1084269 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 11:23:50.030830 1084269 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 11:23:50.030869 1084269 certs.go:257] generating profile certs ...
	I1213 11:23:50.031033 1084269 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/client.key
	I1213 11:23:50.031153 1084269 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/apiserver.key.9abc0a72
	I1213 11:23:50.031226 1084269 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/proxy-client.key
	I1213 11:23:50.031388 1084269 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 11:23:50.031455 1084269 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 11:23:50.031480 1084269 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:23:50.031545 1084269 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:23:50.031630 1084269 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:23:50.031680 1084269 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 11:23:50.031769 1084269 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 11:23:50.032744 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:23:50.061413 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:23:50.086971 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:23:50.113387 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:23:50.135880 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 11:23:50.154650 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:23:50.172783 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:23:50.194446 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:23:50.214812 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 11:23:50.233662 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 11:23:50.254320 1084269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:23:50.273644 1084269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:23:50.287391 1084269 ssh_runner.go:195] Run: openssl version
	I1213 11:23:50.293831 1084269 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 11:23:50.306890 1084269 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 11:23:50.314812 1084269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 11:23:50.318890 1084269 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 11:23:50.318985 1084269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 11:23:50.360706 1084269 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:23:50.368487 1084269 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 11:23:50.376054 1084269 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 11:23:50.384185 1084269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 11:23:50.388191 1084269 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 11:23:50.388279 1084269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 11:23:50.429937 1084269 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:23:50.439084 1084269 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:23:50.446995 1084269 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:23:50.455290 1084269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:23:50.459317 1084269 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:23:50.459419 1084269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:23:50.501282 1084269 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
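
Each CA step above follows the OpenSSL lookup convention: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, and expose it under its subject hash with a .0 suffix so TLS libraries can resolve it. A sketch that shells out for the hash, as the run does; hashLink is an illustrative name:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink symlinks pemPath into certsDir as <subject-hash>.0.
func hashLink(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // behave like ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
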
	I1213 11:23:50.509074 1084269 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:23:50.513030 1084269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:23:50.555890 1084269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:23:50.597399 1084269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:23:50.638305 1084269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:23:50.689820 1084269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:23:50.731604 1084269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
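
The six `openssl x509 -checkend 86400` calls above each assert that a control-plane cert stays valid for at least one more day; certs that fail would be regenerated. The equivalent check with Go's stdlib, assuming a readable PEM path; validFor is a made-up name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the cert at path is still valid d from now.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same semantics as -checkend: not yet expired at now+d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
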
	I1213 11:23:50.773446 1084269 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-060355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-060355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:23:50.773609 1084269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:23:50.773712 1084269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:23:50.802689 1084269 cri.go:89] found id: ""
	I1213 11:23:50.802756 1084269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:23:50.810697 1084269 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:23:50.810766 1084269 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:23:50.810827 1084269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:23:50.818223 1084269 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:23:50.818641 1084269 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-060355" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 11:23:50.818751 1084269 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-904040/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-060355" cluster setting kubeconfig missing "kubernetes-upgrade-060355" context setting]
	I1213 11:23:50.819031 1084269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:23:50.827444 1084269 kapi.go:59] client config for kubernetes-upgrade-060355: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/kubernetes-upgrade-060355/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:23:50.827967 1084269 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 11:23:50.827984 1084269 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:23:50.827990 1084269 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 11:23:50.827995 1084269 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:23:50.828000 1084269 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 11:23:50.828347 1084269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:23:50.851790 1084269 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 11:23:12.376787648 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 11:23:49.865216990 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-060355"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
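
The drift diff above is essentially the kubeadm v1beta3 to v1beta4 schema migration: extraArgs turned from a plain string map into an ordered list of name/value pairs (and the etcd proxy-refresh-interval override was dropped along with the version bump). A sketch of that map-to-list conversion; the Arg type here is illustrative, not kubeadm's:

package main

import (
	"fmt"
	"sort"
)

type Arg struct{ Name, Value string }

// toV1beta4 converts a v1beta3-style extraArgs map into the v1beta4 list form.
func toV1beta4(extra map[string]string) []Arg {
	keys := make([]string, 0, len(extra))
	for k := range extra {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order keeps the rendered config reproducible
	args := make([]Arg, 0, len(keys))
	for _, k := range keys {
		args = append(args, Arg{k, extra[k]})
	}
	return args
}

func main() {
	fmt.Println(toV1beta4(map[string]string{
		"allocate-node-cidrs": "true",
		"leader-elect":        "false",
	}))
}
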
	I1213 11:23:50.851824 1084269 kubeadm.go:1161] stopping kube-system containers ...
	I1213 11:23:50.851837 1084269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 11:23:50.851988 1084269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:23:50.885888 1084269 cri.go:89] found id: ""
	I1213 11:23:50.885999 1084269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 11:23:50.900437 1084269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:23:50.908660 1084269 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 13 11:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 13 11:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 13 11:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 13 11:23 /etc/kubernetes/scheduler.conf
	
	I1213 11:23:50.908753 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:23:50.917026 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:23:50.925220 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:23:50.933522 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:23:50.933669 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:23:50.942005 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:23:50.950650 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:23:50.950751 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:23:50.958903 1084269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:23:50.972399 1084269 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:23:51.029524 1084269 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:23:52.558111 1084269 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.528536411s)
	I1213 11:23:52.558185 1084269 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:23:52.798144 1084269 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:23:52.889469 1084269 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
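
The restart path above does not run a full kubeadm init; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml so existing node state is reused. A condensed sketch of that loop with the PATH prefix the log shows; runPhases is a made-up name:

package main

import (
	"fmt"
	"os/exec"
)

// runPhases replays selected kubeadm init phases against one config file.
func runPhases(binDir, config string) error {
	phases := []string{
		"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local",
	}
	for _, phase := range phases {
		// Prefix PATH so the versioned kubeadm/kubelet binaries are found first.
		cmd := fmt.Sprintf(
			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, phase, config)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q: %w: %s", phase, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(runPhases("/var/lib/minikube/binaries/v1.35.0-beta.0",
		"/var/tmp/minikube/kubeadm.yaml"))
}
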
	I1213 11:23:52.971702 1084269 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:23:52.971794 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:53.471934 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:53.972443 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:54.471940 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:54.972572 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:55.471916 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:55.971998 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:56.472512 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:56.971969 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:57.471958 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:57.972709 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:58.472397 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:58.971916 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:59.472132 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:59.972765 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:00.472868 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:00.972293 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:01.472650 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:01.971918 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:02.473527 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:02.971920 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:03.472807 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:03.971919 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:04.471913 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:04.971912 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:05.471926 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:05.971958 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:06.471937 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:06.972615 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:07.472444 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:07.972266 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:08.471930 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:08.972776 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:09.471956 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:09.972437 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:10.471892 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:10.972403 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:11.471945 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:11.971939 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:12.471941 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:12.972687 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:13.471924 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:13.972702 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:14.471844 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:14.971891 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:15.471888 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:15.972796 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:16.472566 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:16.971920 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:17.472593 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:17.972753 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:18.472212 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:18.972220 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:19.471918 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:19.972668 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:20.472304 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:20.972517 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:21.472393 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:21.971901 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:22.472698 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:22.972664 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:23.472754 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:23.971935 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:24.471942 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:24.972276 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:25.472745 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:25.971920 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:26.472503 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:26.972675 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:27.471933 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:27.972064 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:28.472088 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:28.971853 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:29.472451 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:29.972842 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:30.471861 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:30.971927 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:31.472828 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:31.972757 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:32.471927 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:32.971900 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:33.471949 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:33.971942 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:34.471951 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:34.972032 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:35.472446 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:35.971916 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:36.472668 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:36.972781 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:37.472453 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:37.972390 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:38.472620 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:38.972603 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:39.471918 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:39.972607 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:40.471938 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:40.972645 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:41.472306 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:41.971869 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:42.472632 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:42.972312 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:43.472242 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:43.972878 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:44.472592 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:44.972753 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:45.471923 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:45.972590 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:46.471963 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:46.972590 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:47.471940 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:47.972748 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:48.471930 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:48.972735 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:49.472605 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:49.971908 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:50.471936 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:50.972909 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:51.472603 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:51.971898 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:52.472562 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
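
Everything between 11:23:52 and 11:24:52 above is a single wait loop: pgrep for the apiserver process every 500ms; after roughly a minute without a hit the run pauses to dump diagnostics (below), then resumes polling. In this failure the apiserver never appears. A local sketch of the wait; waitForProcess is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a matching process exists or timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", pattern)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
}
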
	I1213 11:24:52.972176 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:52.972257 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:53.019980 1084269 cri.go:89] found id: ""
	I1213 11:24:53.020003 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.020011 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:24:53.020018 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:24:53.020075 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:53.053601 1084269 cri.go:89] found id: ""
	I1213 11:24:53.053623 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.053638 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:24:53.053644 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:24:53.053702 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:53.081049 1084269 cri.go:89] found id: ""
	I1213 11:24:53.081073 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.081081 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:24:53.081088 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:53.081148 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:53.106734 1084269 cri.go:89] found id: ""
	I1213 11:24:53.106760 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.106769 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:24:53.106776 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:53.106837 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:53.132226 1084269 cri.go:89] found id: ""
	I1213 11:24:53.132254 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.132263 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:53.132270 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:53.132328 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:53.162367 1084269 cri.go:89] found id: ""
	I1213 11:24:53.162391 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.162400 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:24:53.162406 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:53.162469 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:53.186409 1084269 cri.go:89] found id: ""
	I1213 11:24:53.186433 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.186442 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:53.186449 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:53.186512 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:53.211397 1084269 cri.go:89] found id: ""
	I1213 11:24:53.211422 1084269 logs.go:282] 0 containers: []
	W1213 11:24:53.211430 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:53.211440 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:53.211451 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:53.613681 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:53.613708 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:24:53.613723 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:24:53.644809 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:24:53.644847 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:53.676049 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:53.676079 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:53.743085 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:53.743125 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
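
When the wait misses, the run gathers diagnostics from a fixed set of sources: kubectl describe nodes (refused here since the apiserver is down), the crio and kubelet journals, container status, and dmesg. A sketch of collecting such sources into one report; the source list mirrors the log but the code itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs each named diagnostic command and concatenates the output,
// continuing past individual failures (as "describe nodes" fails above).
func gather(sources map[string]string) string {
	var report string
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		report += fmt.Sprintf(">>> %s (err=%v)\n%s\n", name, err, out)
	}
	return report
}

func main() {
	fmt.Print(gather(map[string]string{
		"kubelet": "journalctl -u kubelet -n 400",
		"crio":    "journalctl -u crio -n 400",
		"dmesg":   "dmesg --level warn,err | tail -n 400",
	}))
}
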
	I1213 11:24:56.265230 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:56.274997 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:56.275068 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:56.304118 1084269 cri.go:89] found id: ""
	I1213 11:24:56.304142 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.304151 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:24:56.304157 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:24:56.304217 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:56.330712 1084269 cri.go:89] found id: ""
	I1213 11:24:56.330735 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.330744 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:24:56.330750 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:24:56.330810 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:56.362014 1084269 cri.go:89] found id: ""
	I1213 11:24:56.362036 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.362044 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:24:56.362050 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:56.362107 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:56.391475 1084269 cri.go:89] found id: ""
	I1213 11:24:56.391504 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.391514 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:24:56.391523 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:56.391606 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:56.420698 1084269 cri.go:89] found id: ""
	I1213 11:24:56.420723 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.420733 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:56.420739 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:56.420851 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:56.447095 1084269 cri.go:89] found id: ""
	I1213 11:24:56.447122 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.447132 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:24:56.447139 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:56.447198 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:56.473462 1084269 cri.go:89] found id: ""
	I1213 11:24:56.473489 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.473498 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:56.473504 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:56.473591 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:56.501245 1084269 cri.go:89] found id: ""
	I1213 11:24:56.501269 1084269 logs.go:282] 0 containers: []
	W1213 11:24:56.501277 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:56.501286 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:56.501297 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:56.570192 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:56.570219 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:24:56.570240 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:24:56.601883 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:24:56.601920 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:24:56.630297 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:56.630326 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:56.696009 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:56.696046 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:59.212650 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:24:59.222723 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:24:59.222796 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:24:59.247201 1084269 cri.go:89] found id: ""
	I1213 11:24:59.247229 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.247240 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:24:59.247247 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:24:59.247306 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:24:59.272235 1084269 cri.go:89] found id: ""
	I1213 11:24:59.272262 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.272272 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:24:59.272278 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:24:59.272340 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:24:59.297496 1084269 cri.go:89] found id: ""
	I1213 11:24:59.297519 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.297528 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:24:59.297556 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:24:59.297618 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:24:59.324624 1084269 cri.go:89] found id: ""
	I1213 11:24:59.324647 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.324656 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:24:59.324663 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:24:59.324722 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:24:59.352086 1084269 cri.go:89] found id: ""
	I1213 11:24:59.352108 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.352116 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:24:59.352123 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:24:59.352181 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:24:59.386154 1084269 cri.go:89] found id: ""
	I1213 11:24:59.386224 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.386248 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:24:59.386267 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:24:59.386356 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:24:59.411003 1084269 cri.go:89] found id: ""
	I1213 11:24:59.411071 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.411094 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:24:59.411114 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:24:59.411189 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:24:59.437422 1084269 cri.go:89] found id: ""
	I1213 11:24:59.437497 1084269 logs.go:282] 0 containers: []
	W1213 11:24:59.437520 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:24:59.437585 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:24:59.437605 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:24:59.504260 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:24:59.504295 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:24:59.521059 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:24:59.521136 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:24:59.622685 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:24:59.622710 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:24:59.622751 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:24:59.658725 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:24:59.658767 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
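
(Editorial note: the `found id: ""` / `0 containers` pairs above come from minikube shelling out to `crictl` once per control-plane component. Below is a minimal, hedged sketch of that probe — not minikube's actual `cri.go` code; the helper names are invented, and only the `sudo crictl ps -a --quiet --name=<component>` command is taken from the log.)

// Sketch: probe a CRI runtime for a component's containers, as the log above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a component.
// An empty result corresponds to the `found id: ""` / `0 containers: []`
// lines in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
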
	I1213 11:25:02.195482 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:02.205528 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:02.205619 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:02.232740 1084269 cri.go:89] found id: ""
	I1213 11:25:02.232764 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.232772 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:02.232785 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:02.232857 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:02.258548 1084269 cri.go:89] found id: ""
	I1213 11:25:02.258574 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.258583 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:02.258589 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:02.258647 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:02.283030 1084269 cri.go:89] found id: ""
	I1213 11:25:02.283057 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.283066 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:02.283072 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:02.283130 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:02.308737 1084269 cri.go:89] found id: ""
	I1213 11:25:02.308763 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.308772 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:02.308778 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:02.308836 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:02.335082 1084269 cri.go:89] found id: ""
	I1213 11:25:02.335107 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.335116 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:02.335123 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:02.335182 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:02.362525 1084269 cri.go:89] found id: ""
	I1213 11:25:02.362552 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.362561 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:02.362567 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:02.362624 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:02.388889 1084269 cri.go:89] found id: ""
	I1213 11:25:02.388916 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.388925 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:02.388931 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:02.389037 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:02.416533 1084269 cri.go:89] found id: ""
	I1213 11:25:02.416557 1084269 logs.go:282] 0 containers: []
	W1213 11:25:02.416566 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:02.416576 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:02.416588 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:02.447638 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:02.447672 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:02.516850 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:02.516892 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:02.534175 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:02.534207 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:02.601631 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:02.601671 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:02.601685 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:05.133988 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:05.144206 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:05.144273 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:05.171293 1084269 cri.go:89] found id: ""
	I1213 11:25:05.171325 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.171335 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:05.171342 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:05.171400 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:05.197637 1084269 cri.go:89] found id: ""
	I1213 11:25:05.197669 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.197678 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:05.197684 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:05.197753 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:05.223362 1084269 cri.go:89] found id: ""
	I1213 11:25:05.223390 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.223400 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:05.223430 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:05.223504 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:05.249301 1084269 cri.go:89] found id: ""
	I1213 11:25:05.249330 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.249339 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:05.249346 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:05.249459 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:05.276360 1084269 cri.go:89] found id: ""
	I1213 11:25:05.276387 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.276396 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:05.276403 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:05.276515 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:05.301419 1084269 cri.go:89] found id: ""
	I1213 11:25:05.301446 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.301456 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:05.301462 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:05.301589 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:05.327241 1084269 cri.go:89] found id: ""
	I1213 11:25:05.327269 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.327278 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:05.327284 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:05.327368 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:05.353300 1084269 cri.go:89] found id: ""
	I1213 11:25:05.353326 1084269 logs.go:282] 0 containers: []
	W1213 11:25:05.353336 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:05.353345 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:05.353386 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:05.425541 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:05.425583 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:05.442084 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:05.442122 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:05.508993 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:05.509020 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:05.509048 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:05.540569 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:05.540604 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
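
(Editorial note: the timestamps on the `sudo pgrep -xnf kube-apiserver.*minikube.*` lines — 11:24:59, :02, :05, :08, ... — show a roughly three-second wait loop. The sketch below reconstructs that loop's shape under stated assumptions; the interval and the pgrep pattern are read off the log, the timeout and structure are illustrative, not minikube source.)

// Sketch: poll for a running kube-apiserver process until a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log: pgrep exits non-zero
// when no matching process is found.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the log, each failed probe is followed by a log-gathering pass
		// (kubelet, dmesg, describe nodes, CRI-O, container status) before
		// the next ~3s retry.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
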
	I1213 11:25:08.070749 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:08.081293 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:08.081389 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:08.106876 1084269 cri.go:89] found id: ""
	I1213 11:25:08.106903 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.106911 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:08.106917 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:08.106993 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:08.132470 1084269 cri.go:89] found id: ""
	I1213 11:25:08.132506 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.132515 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:08.132521 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:08.132630 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:08.160248 1084269 cri.go:89] found id: ""
	I1213 11:25:08.160282 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.160291 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:08.160297 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:08.160397 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:08.189930 1084269 cri.go:89] found id: ""
	I1213 11:25:08.189957 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.189967 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:08.189973 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:08.190083 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:08.214477 1084269 cri.go:89] found id: ""
	I1213 11:25:08.214501 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.214510 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:08.214516 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:08.214593 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:08.243769 1084269 cri.go:89] found id: ""
	I1213 11:25:08.243804 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.243814 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:08.243836 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:08.243918 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:08.272995 1084269 cri.go:89] found id: ""
	I1213 11:25:08.273040 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.273076 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:08.273090 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:08.273173 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:08.301130 1084269 cri.go:89] found id: ""
	I1213 11:25:08.301164 1084269 logs.go:282] 0 containers: []
	W1213 11:25:08.301173 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:08.301182 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:08.301221 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:08.370485 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:08.370520 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:08.386451 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:08.386480 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:08.450590 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:08.450652 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:08.450681 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:08.480386 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:08.480421 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:11.014165 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:11.026226 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:11.026298 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:11.063348 1084269 cri.go:89] found id: ""
	I1213 11:25:11.063385 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.063395 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:11.063408 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:11.063475 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:11.093260 1084269 cri.go:89] found id: ""
	I1213 11:25:11.093284 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.093293 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:11.093299 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:11.093360 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:11.122479 1084269 cri.go:89] found id: ""
	I1213 11:25:11.122513 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.122523 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:11.122529 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:11.122593 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:11.149514 1084269 cri.go:89] found id: ""
	I1213 11:25:11.149571 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.149580 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:11.149587 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:11.149659 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:11.175989 1084269 cri.go:89] found id: ""
	I1213 11:25:11.176015 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.176024 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:11.176030 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:11.176091 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:11.206185 1084269 cri.go:89] found id: ""
	I1213 11:25:11.206212 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.206221 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:11.206229 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:11.206287 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:11.233968 1084269 cri.go:89] found id: ""
	I1213 11:25:11.233992 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.234000 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:11.234007 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:11.234064 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:11.263420 1084269 cri.go:89] found id: ""
	I1213 11:25:11.263446 1084269 logs.go:282] 0 containers: []
	W1213 11:25:11.263456 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:11.263466 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:11.263478 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:11.290902 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:11.290932 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:11.358376 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:11.358410 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:11.374872 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:11.374907 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:11.442174 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:11.442198 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:11.442212 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:13.977898 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:13.988064 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:13.988131 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:14.033747 1084269 cri.go:89] found id: ""
	I1213 11:25:14.033773 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.033782 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:14.033789 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:14.033853 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:14.064959 1084269 cri.go:89] found id: ""
	I1213 11:25:14.064981 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.064989 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:14.064995 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:14.065063 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:14.094313 1084269 cri.go:89] found id: ""
	I1213 11:25:14.094340 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.094349 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:14.094355 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:14.094415 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:14.119741 1084269 cri.go:89] found id: ""
	I1213 11:25:14.119767 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.119777 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:14.119783 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:14.119840 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:14.147149 1084269 cri.go:89] found id: ""
	I1213 11:25:14.147176 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.147185 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:14.147192 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:14.147251 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:14.174457 1084269 cri.go:89] found id: ""
	I1213 11:25:14.174480 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.174489 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:14.174495 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:14.174552 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:14.199426 1084269 cri.go:89] found id: ""
	I1213 11:25:14.199451 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.199460 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:14.199467 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:14.199523 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:14.225052 1084269 cri.go:89] found id: ""
	I1213 11:25:14.225077 1084269 logs.go:282] 0 containers: []
	W1213 11:25:14.225085 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:14.225104 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:14.225116 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:14.292512 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:14.292546 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:14.308752 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:14.308782 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:14.373773 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:14.373806 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:14.373824 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:14.407089 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:14.407126 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
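
(Editorial note: each "Gathering logs for ..." line above is one shell command run on the node, and a non-zero exit — as with `kubectl describe nodes` while nothing is listening on localhost:8443 — is logged as a W-level warning rather than aborting the pass. A hedged sketch of that step follows; the command strings are copied from the log, while the surrounding loop and output format are illustrative only.)

// Sketch: run each log-gathering command, recording failures without stopping.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	gathers := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
	}
	for _, g := range gathers {
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			// Matches the repeated "failed describe nodes ... exited with
			// status 1" warnings above while the apiserver is down.
			fmt.Printf("W gathering %s failed: %v\n%s", g.name, err, out)
			continue
		}
		fmt.Printf("I gathered %s (%d bytes)\n", g.name, len(out))
	}
}
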
	I1213 11:25:16.940909 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:16.951004 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:16.951078 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:16.986894 1084269 cri.go:89] found id: ""
	I1213 11:25:16.986916 1084269 logs.go:282] 0 containers: []
	W1213 11:25:16.986925 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:16.986930 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:16.986987 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:17.017638 1084269 cri.go:89] found id: ""
	I1213 11:25:17.018033 1084269 logs.go:282] 0 containers: []
	W1213 11:25:17.018044 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:17.018050 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:17.018114 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:17.050600 1084269 cri.go:89] found id: ""
	I1213 11:25:17.050623 1084269 logs.go:282] 0 containers: []
	W1213 11:25:17.050632 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:17.050637 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:17.050697 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:17.077575 1084269 cri.go:89] found id: ""
	I1213 11:25:17.077599 1084269 logs.go:282] 0 containers: []
	W1213 11:25:17.077607 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:17.077614 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:17.077688 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:17.105164 1084269 cri.go:89] found id: ""
	I1213 11:25:17.105191 1084269 logs.go:282] 0 containers: []
	W1213 11:25:17.105224 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:17.105233 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:17.105311 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:17.131413 1084269 cri.go:89] found id: ""
	I1213 11:25:17.131445 1084269 logs.go:282] 0 containers: []
	W1213 11:25:17.131455 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:17.131462 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:17.131523 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:17.159605 1084269 cri.go:89] found id: ""
	I1213 11:25:17.159629 1084269 logs.go:282] 0 containers: []
	W1213 11:25:17.159638 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:17.159644 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:17.159706 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:17.189059 1084269 cri.go:89] found id: ""
	I1213 11:25:17.189129 1084269 logs.go:282] 0 containers: []
	W1213 11:25:17.189154 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:17.189177 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:17.189215 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:17.258073 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:17.258112 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:17.274391 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:17.274420 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:17.337055 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:17.337080 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:17.337096 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:17.368226 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:17.368261 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:19.898855 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:19.910620 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:19.910694 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:19.938172 1084269 cri.go:89] found id: ""
	I1213 11:25:19.938209 1084269 logs.go:282] 0 containers: []
	W1213 11:25:19.938218 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:19.938224 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:19.938287 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:19.969184 1084269 cri.go:89] found id: ""
	I1213 11:25:19.969208 1084269 logs.go:282] 0 containers: []
	W1213 11:25:19.969218 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:19.969224 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:19.969284 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:19.996917 1084269 cri.go:89] found id: ""
	I1213 11:25:19.996945 1084269 logs.go:282] 0 containers: []
	W1213 11:25:19.996955 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:19.996961 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:19.997027 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:20.034953 1084269 cri.go:89] found id: ""
	I1213 11:25:20.034987 1084269 logs.go:282] 0 containers: []
	W1213 11:25:20.034997 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:20.035004 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:20.035065 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:20.068107 1084269 cri.go:89] found id: ""
	I1213 11:25:20.068149 1084269 logs.go:282] 0 containers: []
	W1213 11:25:20.068162 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:20.068172 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:20.068251 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:20.098331 1084269 cri.go:89] found id: ""
	I1213 11:25:20.098409 1084269 logs.go:282] 0 containers: []
	W1213 11:25:20.098432 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:20.098444 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:20.098532 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:20.126004 1084269 cri.go:89] found id: ""
	I1213 11:25:20.126035 1084269 logs.go:282] 0 containers: []
	W1213 11:25:20.126048 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:20.126058 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:20.126143 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:20.152404 1084269 cri.go:89] found id: ""
	I1213 11:25:20.152429 1084269 logs.go:282] 0 containers: []
	W1213 11:25:20.152438 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:20.152448 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:20.152459 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:20.187211 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:20.187247 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:20.217915 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:20.217959 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:20.285745 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:20.285783 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:20.302916 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:20.302950 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:20.372631 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:22.873669 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:22.883759 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:22.883824 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:22.908754 1084269 cri.go:89] found id: ""
	I1213 11:25:22.908780 1084269 logs.go:282] 0 containers: []
	W1213 11:25:22.908789 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:22.908796 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:22.908856 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:22.935904 1084269 cri.go:89] found id: ""
	I1213 11:25:22.935929 1084269 logs.go:282] 0 containers: []
	W1213 11:25:22.935938 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:22.935943 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:22.935999 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:22.962289 1084269 cri.go:89] found id: ""
	I1213 11:25:22.962312 1084269 logs.go:282] 0 containers: []
	W1213 11:25:22.962321 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:22.962326 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:22.962384 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:23.015281 1084269 cri.go:89] found id: ""
	I1213 11:25:23.015323 1084269 logs.go:282] 0 containers: []
	W1213 11:25:23.015333 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:23.015341 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:23.015404 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:23.051057 1084269 cri.go:89] found id: ""
	I1213 11:25:23.051085 1084269 logs.go:282] 0 containers: []
	W1213 11:25:23.051095 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:23.051102 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:23.051163 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:23.077315 1084269 cri.go:89] found id: ""
	I1213 11:25:23.077341 1084269 logs.go:282] 0 containers: []
	W1213 11:25:23.077350 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:23.077357 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:23.077428 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:23.102928 1084269 cri.go:89] found id: ""
	I1213 11:25:23.102956 1084269 logs.go:282] 0 containers: []
	W1213 11:25:23.102965 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:23.102972 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:23.103035 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:23.131933 1084269 cri.go:89] found id: ""
	I1213 11:25:23.131965 1084269 logs.go:282] 0 containers: []
	W1213 11:25:23.131975 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:23.131984 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:23.131996 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:23.198465 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:23.198500 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:23.214777 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:23.214808 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:23.279927 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:23.279946 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:23.279959 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:23.310774 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:23.310813 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:25.838334 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:25.848800 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:25.848874 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:25.873161 1084269 cri.go:89] found id: ""
	I1213 11:25:25.873188 1084269 logs.go:282] 0 containers: []
	W1213 11:25:25.873198 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:25.873204 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:25.873269 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:25.898873 1084269 cri.go:89] found id: ""
	I1213 11:25:25.898899 1084269 logs.go:282] 0 containers: []
	W1213 11:25:25.898908 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:25.898915 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:25.898973 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:25.924447 1084269 cri.go:89] found id: ""
	I1213 11:25:25.924472 1084269 logs.go:282] 0 containers: []
	W1213 11:25:25.924481 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:25.924487 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:25.924544 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:25.949721 1084269 cri.go:89] found id: ""
	I1213 11:25:25.949750 1084269 logs.go:282] 0 containers: []
	W1213 11:25:25.949759 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:25.949765 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:25.949822 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:25.976607 1084269 cri.go:89] found id: ""
	I1213 11:25:25.976635 1084269 logs.go:282] 0 containers: []
	W1213 11:25:25.976644 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:25.976650 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:25.976708 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:26.014618 1084269 cri.go:89] found id: ""
	I1213 11:25:26.014649 1084269 logs.go:282] 0 containers: []
	W1213 11:25:26.014659 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:26.014668 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:26.014733 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:26.048093 1084269 cri.go:89] found id: ""
	I1213 11:25:26.048123 1084269 logs.go:282] 0 containers: []
	W1213 11:25:26.048133 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:26.048141 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:26.048211 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:26.074959 1084269 cri.go:89] found id: ""
	I1213 11:25:26.074991 1084269 logs.go:282] 0 containers: []
	W1213 11:25:26.075001 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:26.075011 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:26.075047 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:26.138712 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:26.138732 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:26.138746 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:26.170036 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:26.170068 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:26.201030 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:26.201107 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:26.268791 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:26.268827 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:28.785694 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:28.795595 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:28.795669 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:28.827002 1084269 cri.go:89] found id: ""
	I1213 11:25:28.827027 1084269 logs.go:282] 0 containers: []
	W1213 11:25:28.827035 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:28.827042 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:28.827097 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:28.852419 1084269 cri.go:89] found id: ""
	I1213 11:25:28.852445 1084269 logs.go:282] 0 containers: []
	W1213 11:25:28.852455 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:28.852461 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:28.852521 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:28.878781 1084269 cri.go:89] found id: ""
	I1213 11:25:28.878808 1084269 logs.go:282] 0 containers: []
	W1213 11:25:28.878818 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:28.878825 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:28.878905 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:28.904482 1084269 cri.go:89] found id: ""
	I1213 11:25:28.904516 1084269 logs.go:282] 0 containers: []
	W1213 11:25:28.904525 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:28.904531 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:28.904596 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:28.929322 1084269 cri.go:89] found id: ""
	I1213 11:25:28.929357 1084269 logs.go:282] 0 containers: []
	W1213 11:25:28.929367 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:28.929393 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:28.929473 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:28.956057 1084269 cri.go:89] found id: ""
	I1213 11:25:28.956135 1084269 logs.go:282] 0 containers: []
	W1213 11:25:28.956158 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:28.956178 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:28.956269 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:28.985659 1084269 cri.go:89] found id: ""
	I1213 11:25:28.985726 1084269 logs.go:282] 0 containers: []
	W1213 11:25:28.985749 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:28.985771 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:28.985858 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:29.022812 1084269 cri.go:89] found id: ""
	I1213 11:25:29.022879 1084269 logs.go:282] 0 containers: []
	W1213 11:25:29.022901 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:29.022925 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:29.022965 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:29.097735 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:29.097772 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:29.113812 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:29.113842 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:29.177082 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:29.177105 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:29.177119 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:29.207640 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:29.207673 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:31.738828 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:31.749445 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:31.749585 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:31.775583 1084269 cri.go:89] found id: ""
	I1213 11:25:31.775609 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.775618 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:31.775625 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:31.775685 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:31.803029 1084269 cri.go:89] found id: ""
	I1213 11:25:31.803068 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.803081 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:31.803093 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:31.803164 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:31.834251 1084269 cri.go:89] found id: ""
	I1213 11:25:31.834276 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.834285 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:31.834292 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:31.834355 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:31.860613 1084269 cri.go:89] found id: ""
	I1213 11:25:31.860652 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.860664 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:31.860671 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:31.860743 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:31.888436 1084269 cri.go:89] found id: ""
	I1213 11:25:31.888462 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.888480 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:31.888488 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:31.888557 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:31.916530 1084269 cri.go:89] found id: ""
	I1213 11:25:31.916606 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.916628 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:31.916648 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:31.916731 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:31.944748 1084269 cri.go:89] found id: ""
	I1213 11:25:31.944831 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.944856 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:31.944875 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:31.944963 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:31.971030 1084269 cri.go:89] found id: ""
	I1213 11:25:31.971055 1084269 logs.go:282] 0 containers: []
	W1213 11:25:31.971064 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:31.971074 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:31.971085 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:32.052835 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:32.052886 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:32.070197 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:32.070235 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:32.138101 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:32.138135 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:32.138155 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:32.174448 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:32.174487 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
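
The paired "listing CRI containers" / `found id: ""` lines above are name-filtered crictl queries: "crictl ps -a --quiet --name=<component>" prints only container IDs, so empty output means zero containers for that component. A minimal Go sketch of the same check, assuming crictl is on the node's PATH (the listContainers helper is hypothetical, not minikube's actual function):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the log lines above: run
    // `crictl ps -a --quiet --name=<name>` and return the container IDs
    // it prints, one per line. Empty output means "0 containers".
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                // Corresponds to the W-level `No container was found matching` lines.
                fmt.Printf("no container was found matching %q\n", name)
            }
        }
    }

Because all eight component names come back empty here, every cycle falls through to the "Gathering logs for ..." steps.
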
	I1213 11:25:34.705354 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:34.717604 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:34.717678 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:34.742904 1084269 cri.go:89] found id: ""
	I1213 11:25:34.742984 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.742999 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:34.743007 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:34.743085 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:34.769440 1084269 cri.go:89] found id: ""
	I1213 11:25:34.769472 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.769482 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:34.769489 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:34.769579 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:34.794274 1084269 cri.go:89] found id: ""
	I1213 11:25:34.794301 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.794310 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:34.794317 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:34.794375 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:34.824195 1084269 cri.go:89] found id: ""
	I1213 11:25:34.824218 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.824227 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:34.824233 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:34.824293 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:34.854005 1084269 cri.go:89] found id: ""
	I1213 11:25:34.854032 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.854041 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:34.854047 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:34.854107 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:34.880588 1084269 cri.go:89] found id: ""
	I1213 11:25:34.880613 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.880622 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:34.880629 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:34.880693 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:34.906831 1084269 cri.go:89] found id: ""
	I1213 11:25:34.906857 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.906866 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:34.906872 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:34.906932 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:34.933415 1084269 cri.go:89] found id: ""
	I1213 11:25:34.933443 1084269 logs.go:282] 0 containers: []
	W1213 11:25:34.933454 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:34.933464 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:34.933494 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:34.964406 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:34.964442 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:35.000191 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:35.000230 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:35.077732 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:35.077776 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:35.095606 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:35.095689 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:35.165997 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
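
Every "describe nodes" attempt in these cycles fails identically: kubectl reports "connection refused" on localhost:8443, meaning nothing is accepting TCP connections on the apiserver port at all, which is consistent with the empty kube-apiserver container listing above. A quick way to tell "port closed" apart from "apiserver up but unhealthy" is a plain TCP dial; this sketch is illustrative, not something minikube runs here:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl's "connection refused" corresponds to a failed dial here:
        // no process is listening on the apiserver port.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443; the failure would be higher in the stack")
    }
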
	I1213 11:25:37.666503 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:37.676880 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:37.676956 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:37.706782 1084269 cri.go:89] found id: ""
	I1213 11:25:37.706808 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.706817 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:37.706824 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:37.706893 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:37.738776 1084269 cri.go:89] found id: ""
	I1213 11:25:37.738803 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.738812 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:37.738821 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:37.738881 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:37.765840 1084269 cri.go:89] found id: ""
	I1213 11:25:37.765863 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.765872 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:37.765878 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:37.765935 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:37.792772 1084269 cri.go:89] found id: ""
	I1213 11:25:37.792793 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.792802 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:37.792808 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:37.792865 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:37.821807 1084269 cri.go:89] found id: ""
	I1213 11:25:37.821833 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.821848 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:37.821855 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:37.821915 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:37.847109 1084269 cri.go:89] found id: ""
	I1213 11:25:37.847137 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.847147 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:37.847154 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:37.847210 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:37.871835 1084269 cri.go:89] found id: ""
	I1213 11:25:37.871867 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.871877 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:37.871883 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:37.871943 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:37.897105 1084269 cri.go:89] found id: ""
	I1213 11:25:37.897131 1084269 logs.go:282] 0 containers: []
	W1213 11:25:37.897147 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:37.897157 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:37.897168 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:37.929343 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:37.929375 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:37.998200 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:37.998247 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:38.016178 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:38.016216 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:38.091927 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:38.091949 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:38.091963 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:40.624386 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:40.634430 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:40.634507 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:40.660794 1084269 cri.go:89] found id: ""
	I1213 11:25:40.660822 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.660831 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:40.660838 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:40.660908 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:40.689869 1084269 cri.go:89] found id: ""
	I1213 11:25:40.689892 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.689901 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:40.689907 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:40.689974 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:40.723466 1084269 cri.go:89] found id: ""
	I1213 11:25:40.723490 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.723500 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:40.723507 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:40.723577 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:40.747917 1084269 cri.go:89] found id: ""
	I1213 11:25:40.747942 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.747950 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:40.747957 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:40.748015 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:40.772902 1084269 cri.go:89] found id: ""
	I1213 11:25:40.772928 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.772937 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:40.772944 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:40.773001 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:40.800324 1084269 cri.go:89] found id: ""
	I1213 11:25:40.800352 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.800361 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:40.800369 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:40.800440 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:40.825057 1084269 cri.go:89] found id: ""
	I1213 11:25:40.825083 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.825092 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:40.825099 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:40.825161 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:40.851264 1084269 cri.go:89] found id: ""
	I1213 11:25:40.851287 1084269 logs.go:282] 0 containers: []
	W1213 11:25:40.851295 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:40.851304 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:40.851324 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:40.917645 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:40.917681 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:40.934580 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:40.934612 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:41.035068 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:41.035090 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:41.035103 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:41.067323 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:41.067360 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
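
The "sudo pgrep -xnf kube-apiserver.*minikube.*" lines recur on a roughly three-second cadence (11:25:34, :37, :40, :43, ...): the runner is waiting for a process whose full command line matches that pattern before re-checking containers. A minimal sketch of such a poll loop, with an illustrative interval and deadline (the actual timeout is not stated in this log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative, not minikube's real timeout
        for time.Now().Before(deadline) {
            // -x exact match, -n newest process, -f match the full command
            // line, exactly as in the log lines above. pgrep exits non-zero
            // when nothing matches, which surfaces as err != nil.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // the ~3s cadence visible in the timestamps
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }
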
	I1213 11:25:43.600742 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:43.611752 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:43.611821 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:43.639120 1084269 cri.go:89] found id: ""
	I1213 11:25:43.639148 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.639157 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:43.639163 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:43.639230 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:43.682811 1084269 cri.go:89] found id: ""
	I1213 11:25:43.682835 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.682844 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:43.682850 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:43.682908 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:43.719620 1084269 cri.go:89] found id: ""
	I1213 11:25:43.719640 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.719648 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:43.719654 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:43.719712 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:43.755163 1084269 cri.go:89] found id: ""
	I1213 11:25:43.755185 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.755193 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:43.755199 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:43.755261 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:43.785752 1084269 cri.go:89] found id: ""
	I1213 11:25:43.785774 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.785782 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:43.785788 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:43.785846 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:43.810663 1084269 cri.go:89] found id: ""
	I1213 11:25:43.810685 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.810693 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:43.810699 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:43.810757 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:43.835028 1084269 cri.go:89] found id: ""
	I1213 11:25:43.835054 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.835063 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:43.835070 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:43.835127 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:43.862296 1084269 cri.go:89] found id: ""
	I1213 11:25:43.862331 1084269 logs.go:282] 0 containers: []
	W1213 11:25:43.862340 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:43.862350 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:43.862361 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:43.935829 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:43.935869 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:43.951764 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:43.951796 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:44.038313 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:44.038337 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:44.038353 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:44.068835 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:44.068869 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:46.600160 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:46.610151 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:46.610220 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:46.634698 1084269 cri.go:89] found id: ""
	I1213 11:25:46.634726 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.634735 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:46.634741 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:46.634803 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:46.664264 1084269 cri.go:89] found id: ""
	I1213 11:25:46.664291 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.664299 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:46.664305 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:46.664362 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:46.688310 1084269 cri.go:89] found id: ""
	I1213 11:25:46.688337 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.688346 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:46.688352 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:46.688409 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:46.720939 1084269 cri.go:89] found id: ""
	I1213 11:25:46.720968 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.720978 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:46.720985 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:46.721062 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:46.750392 1084269 cri.go:89] found id: ""
	I1213 11:25:46.750418 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.750427 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:46.750433 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:46.750491 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:46.774792 1084269 cri.go:89] found id: ""
	I1213 11:25:46.774819 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.774829 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:46.774835 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:46.774892 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:46.799418 1084269 cri.go:89] found id: ""
	I1213 11:25:46.799442 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.799451 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:46.799458 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:46.799518 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:46.827113 1084269 cri.go:89] found id: ""
	I1213 11:25:46.827144 1084269 logs.go:282] 0 containers: []
	W1213 11:25:46.827154 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:46.827163 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:46.827175 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:46.897643 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:46.897683 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:46.913897 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:46.913926 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:46.995073 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:46.995096 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:46.995112 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:47.028897 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:47.028934 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
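
Each cycle gathers the same five log sources, only in rotating order: the kubelet and CRI-O units via journalctl, the kernel ring buffer via dmesg, "kubectl describe nodes" against the node-local kubeconfig, and a container-status listing. A self-contained sketch that replays the same commands through bash -c (paths and the v1.35.0-beta.0 kubectl version are copied verbatim from the log lines):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The five "Gathering logs for ..." sources from the cycles above.
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, s := range sources {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            fmt.Printf("== %s ==\n%s", s.name, out)
            if err != nil {
                fmt.Printf("(%s failed: %v)\n", s.name, err)
            }
        }
    }
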
	I1213 11:25:49.563269 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:49.573330 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:49.573406 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:49.599186 1084269 cri.go:89] found id: ""
	I1213 11:25:49.599216 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.599225 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:49.599232 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:49.599290 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:49.624848 1084269 cri.go:89] found id: ""
	I1213 11:25:49.624875 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.624884 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:49.624890 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:49.624948 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:49.650472 1084269 cri.go:89] found id: ""
	I1213 11:25:49.650500 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.650509 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:49.650515 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:49.650576 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:49.676451 1084269 cri.go:89] found id: ""
	I1213 11:25:49.676474 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.676484 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:49.676492 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:49.676556 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:49.705910 1084269 cri.go:89] found id: ""
	I1213 11:25:49.705938 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.705947 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:49.705953 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:49.706012 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:49.736502 1084269 cri.go:89] found id: ""
	I1213 11:25:49.736538 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.736546 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:49.736567 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:49.736649 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:49.762033 1084269 cri.go:89] found id: ""
	I1213 11:25:49.762100 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.762124 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:49.762143 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:49.762208 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:49.786308 1084269 cri.go:89] found id: ""
	I1213 11:25:49.786336 1084269 logs.go:282] 0 containers: []
	W1213 11:25:49.786345 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:49.786355 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:49.786370 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:49.802672 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:49.802704 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:49.869329 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:49.869347 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:49.869360 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:49.900368 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:49.900404 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:49.932728 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:49.932758 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:52.502176 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:52.512245 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:52.512321 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:52.538167 1084269 cri.go:89] found id: ""
	I1213 11:25:52.538192 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.538202 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:52.538208 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:52.538265 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:52.563905 1084269 cri.go:89] found id: ""
	I1213 11:25:52.563928 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.563937 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:52.563943 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:52.564004 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:52.589154 1084269 cri.go:89] found id: ""
	I1213 11:25:52.589182 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.589191 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:52.589198 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:52.589256 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:52.614488 1084269 cri.go:89] found id: ""
	I1213 11:25:52.614511 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.614520 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:52.614526 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:52.614585 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:52.643713 1084269 cri.go:89] found id: ""
	I1213 11:25:52.643734 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.643743 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:52.643749 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:52.643809 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:52.673596 1084269 cri.go:89] found id: ""
	I1213 11:25:52.673634 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.673643 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:52.673649 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:52.673708 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:52.699273 1084269 cri.go:89] found id: ""
	I1213 11:25:52.699300 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.699320 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:52.699327 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:52.699390 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:52.727299 1084269 cri.go:89] found id: ""
	I1213 11:25:52.727327 1084269 logs.go:282] 0 containers: []
	W1213 11:25:52.727338 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:52.727348 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:52.727359 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:52.800230 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:52.800264 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:52.816317 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:52.816346 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:52.883901 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:52.883966 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:52.883986 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:52.915342 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:52.915380 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:55.451903 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:55.462844 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:55.462914 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:55.502070 1084269 cri.go:89] found id: ""
	I1213 11:25:55.502098 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.502107 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:55.502113 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:55.502179 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:55.541829 1084269 cri.go:89] found id: ""
	I1213 11:25:55.541857 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.541867 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:55.541873 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:55.541930 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:55.569112 1084269 cri.go:89] found id: ""
	I1213 11:25:55.569139 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.569148 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:55.569154 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:55.569210 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:55.602982 1084269 cri.go:89] found id: ""
	I1213 11:25:55.603010 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.603018 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:55.603027 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:55.603083 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:55.630618 1084269 cri.go:89] found id: ""
	I1213 11:25:55.630646 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.630655 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:55.630661 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:55.630716 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:55.662858 1084269 cri.go:89] found id: ""
	I1213 11:25:55.662886 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.662894 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:55.662900 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:55.662956 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:55.688292 1084269 cri.go:89] found id: ""
	I1213 11:25:55.688319 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.688329 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:55.688334 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:55.688389 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:55.719497 1084269 cri.go:89] found id: ""
	I1213 11:25:55.719524 1084269 logs.go:282] 0 containers: []
	W1213 11:25:55.719533 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:55.719542 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:55.719563 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:55.794889 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:55.794928 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:55.815777 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:55.815804 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:55.898279 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:55.898302 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:55.898314 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:55.929765 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:55.929800 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:25:58.466578 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:25:58.476615 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:25:58.476691 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:25:58.504512 1084269 cri.go:89] found id: ""
	I1213 11:25:58.504539 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.504548 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:25:58.504555 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:25:58.504662 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:25:58.533041 1084269 cri.go:89] found id: ""
	I1213 11:25:58.533066 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.533076 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:25:58.533082 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:25:58.533154 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:25:58.559059 1084269 cri.go:89] found id: ""
	I1213 11:25:58.559085 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.559094 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:25:58.559101 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:25:58.559166 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:25:58.583501 1084269 cri.go:89] found id: ""
	I1213 11:25:58.583526 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.583534 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:25:58.583541 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:25:58.583598 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:25:58.609418 1084269 cri.go:89] found id: ""
	I1213 11:25:58.609445 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.609454 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:25:58.609461 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:25:58.609521 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:25:58.635632 1084269 cri.go:89] found id: ""
	I1213 11:25:58.635659 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.635668 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:25:58.635675 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:25:58.635752 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:25:58.664573 1084269 cri.go:89] found id: ""
	I1213 11:25:58.664600 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.664609 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:25:58.664618 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:25:58.664675 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:25:58.690197 1084269 cri.go:89] found id: ""
	I1213 11:25:58.690223 1084269 logs.go:282] 0 containers: []
	W1213 11:25:58.690234 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:25:58.690243 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:25:58.690283 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:25:58.772051 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:25:58.772094 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:25:58.788635 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:25:58.788667 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:25:58.853058 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:25:58.853081 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:25:58.853094 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:25:58.884262 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:25:58.884297 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
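
The container-status command is itself a fallback chain: "which crictl || echo crictl" substitutes the literal word crictl when the binary is not on PATH, so the crictl attempt fails cleanly and "|| sudo docker ps -a" takes over. The same first-success pattern expressed in Go (runFirst is a hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runFirst runs each bash command line in turn and returns the output
    // of the first one that succeeds, mirroring the "... || sudo docker ps -a"
    // fallback in the log line above.
    func runFirst(cmds ...string) ([]byte, error) {
        var lastErr error
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).Output()
            if err == nil {
                return out, nil
            }
            lastErr = err
        }
        return nil, lastErr
    }

    func main() {
        out, err := runFirst(
            "sudo `which crictl || echo crictl` ps -a",
            "sudo docker ps -a",
        )
        if err != nil {
            fmt.Println("neither crictl nor docker could list containers:", err)
            return
        }
        fmt.Printf("%s", out)
    }
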
	I1213 11:26:01.413322 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:01.423762 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:01.423842 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:01.449348 1084269 cri.go:89] found id: ""
	I1213 11:26:01.449374 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.449384 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:01.449391 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:01.449454 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:01.474890 1084269 cri.go:89] found id: ""
	I1213 11:26:01.474915 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.474924 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:01.474931 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:01.474988 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:01.500800 1084269 cri.go:89] found id: ""
	I1213 11:26:01.500825 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.500835 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:01.500841 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:01.500899 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:01.526505 1084269 cri.go:89] found id: ""
	I1213 11:26:01.526532 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.526541 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:01.526548 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:01.526607 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:01.551547 1084269 cri.go:89] found id: ""
	I1213 11:26:01.551628 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.551644 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:01.551651 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:01.551721 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:01.576888 1084269 cri.go:89] found id: ""
	I1213 11:26:01.576916 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.576925 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:01.576932 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:01.576995 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:01.604789 1084269 cri.go:89] found id: ""
	I1213 11:26:01.604816 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.604828 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:01.604834 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:01.604895 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:01.633373 1084269 cri.go:89] found id: ""
	I1213 11:26:01.633400 1084269 logs.go:282] 0 containers: []
	W1213 11:26:01.633409 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:01.633418 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:01.633437 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:01.700356 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:01.700393 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:01.725641 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:01.725670 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:01.794678 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:01.794711 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:01.794751 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:01.826141 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:01.826179 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:04.357700 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:04.368284 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:04.368356 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:04.393283 1084269 cri.go:89] found id: ""
	I1213 11:26:04.393308 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.393317 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:04.393324 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:04.393383 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:04.420356 1084269 cri.go:89] found id: ""
	I1213 11:26:04.420378 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.420387 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:04.420394 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:04.420451 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:04.447404 1084269 cri.go:89] found id: ""
	I1213 11:26:04.447431 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.447440 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:04.447447 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:04.447509 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:04.477632 1084269 cri.go:89] found id: ""
	I1213 11:26:04.477657 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.477666 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:04.477672 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:04.477734 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:04.504821 1084269 cri.go:89] found id: ""
	I1213 11:26:04.504844 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.504853 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:04.504859 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:04.504976 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:04.530907 1084269 cri.go:89] found id: ""
	I1213 11:26:04.530931 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.530940 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:04.530949 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:04.531009 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:04.558576 1084269 cri.go:89] found id: ""
	I1213 11:26:04.558602 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.558611 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:04.558617 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:04.558684 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:04.585132 1084269 cri.go:89] found id: ""
	I1213 11:26:04.585213 1084269 logs.go:282] 0 containers: []
	W1213 11:26:04.585236 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:04.585260 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:04.585304 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:04.652357 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:04.652394 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:04.669186 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:04.669272 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:04.736040 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:04.736063 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:04.736078 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:04.766640 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:04.766676 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
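
The cycle above can be replayed by hand. The following is a hypothetical reconstruction assembled from the commands the log itself runs; the loop, the variable names, and the echo message are illustrative, while the individual commands are taken verbatim from the log lines above. It assumes shell access to the minikube node (e.g. via minikube ssh) with crictl installed.

#!/usr/bin/env bash
# Hypothetical replay of one probe cycle from the log above.
set -u
components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
            kube-controller-manager kindnet storage-provisioner)
for c in "${components[@]}"; do
  # Same per-component query the log issues; empty output means no container.
  if [ -z "$(sudo crictl ps -a --quiet --name="$c")" ]; then
    echo "no container found matching \"$c\""
  fi
done
# The log sources minikube gathers once every probe comes back empty:
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig
sudo journalctl -u crio -n 400
sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
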
	I1213 11:26:07.296441 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:07.306570 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:07.306640 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:07.331873 1084269 cri.go:89] found id: ""
	I1213 11:26:07.331898 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.331906 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:07.331912 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:07.331977 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:07.356478 1084269 cri.go:89] found id: ""
	I1213 11:26:07.356501 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.356509 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:07.356515 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:07.356572 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:07.382832 1084269 cri.go:89] found id: ""
	I1213 11:26:07.382858 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.382867 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:07.382873 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:07.382933 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:07.410723 1084269 cri.go:89] found id: ""
	I1213 11:26:07.410747 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.410755 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:07.410762 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:07.410827 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:07.442340 1084269 cri.go:89] found id: ""
	I1213 11:26:07.442367 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.442377 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:07.442384 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:07.442446 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:07.467316 1084269 cri.go:89] found id: ""
	I1213 11:26:07.467341 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.467350 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:07.467356 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:07.467415 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:07.496328 1084269 cri.go:89] found id: ""
	I1213 11:26:07.496351 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.496359 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:07.496365 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:07.496428 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:07.523959 1084269 cri.go:89] found id: ""
	I1213 11:26:07.523982 1084269 logs.go:282] 0 containers: []
	W1213 11:26:07.523990 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:07.523999 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:07.524016 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:07.591825 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:07.591862 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:07.608369 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:07.608449 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:07.683106 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:07.683127 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:07.683141 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:07.713386 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:07.713430 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:10.243924 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:10.255571 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:10.255646 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:10.292398 1084269 cri.go:89] found id: ""
	I1213 11:26:10.292425 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.292434 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:10.292440 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:10.292514 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:10.319374 1084269 cri.go:89] found id: ""
	I1213 11:26:10.319400 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.319409 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:10.319416 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:10.319473 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:10.345456 1084269 cri.go:89] found id: ""
	I1213 11:26:10.345482 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.345491 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:10.345498 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:10.345587 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:10.373275 1084269 cri.go:89] found id: ""
	I1213 11:26:10.373301 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.373311 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:10.373318 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:10.373389 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:10.399298 1084269 cri.go:89] found id: ""
	I1213 11:26:10.399327 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.399336 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:10.399343 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:10.399400 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:10.431605 1084269 cri.go:89] found id: ""
	I1213 11:26:10.431630 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.431639 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:10.431646 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:10.431703 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:10.457096 1084269 cri.go:89] found id: ""
	I1213 11:26:10.457121 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.457130 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:10.457136 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:10.457193 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:10.483125 1084269 cri.go:89] found id: ""
	I1213 11:26:10.483151 1084269 logs.go:282] 0 containers: []
	W1213 11:26:10.483160 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:10.483172 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:10.483184 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:10.499026 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:10.499055 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:10.564156 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:10.564175 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:10.564225 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:10.595441 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:10.595478 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:10.624196 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:10.624222 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:13.192839 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:13.207524 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:13.207589 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:13.254859 1084269 cri.go:89] found id: ""
	I1213 11:26:13.254887 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.254897 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:13.254903 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:13.254966 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:13.287331 1084269 cri.go:89] found id: ""
	I1213 11:26:13.287354 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.287363 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:13.287369 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:13.287428 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:13.323580 1084269 cri.go:89] found id: ""
	I1213 11:26:13.323603 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.323612 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:13.323619 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:13.323685 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:13.350138 1084269 cri.go:89] found id: ""
	I1213 11:26:13.350166 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.350175 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:13.350181 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:13.350245 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:13.381278 1084269 cri.go:89] found id: ""
	I1213 11:26:13.381306 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.381315 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:13.381322 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:13.381386 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:13.407466 1084269 cri.go:89] found id: ""
	I1213 11:26:13.407492 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.407502 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:13.407510 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:13.407567 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:13.440830 1084269 cri.go:89] found id: ""
	I1213 11:26:13.440857 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.440866 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:13.440872 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:13.440929 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:13.466926 1084269 cri.go:89] found id: ""
	I1213 11:26:13.466953 1084269 logs.go:282] 0 containers: []
	W1213 11:26:13.466963 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:13.466972 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:13.466984 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:13.483145 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:13.483174 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:13.543468 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:13.543489 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:13.543500 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:13.574023 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:13.574057 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:13.602003 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:13.602033 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:16.168567 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:16.179921 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:16.180001 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:16.215880 1084269 cri.go:89] found id: ""
	I1213 11:26:16.215914 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.215925 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:16.215932 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:16.216000 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:16.295069 1084269 cri.go:89] found id: ""
	I1213 11:26:16.295101 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.295114 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:16.295120 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:16.295189 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:16.322178 1084269 cri.go:89] found id: ""
	I1213 11:26:16.322200 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.322208 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:16.322215 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:16.322288 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:16.355878 1084269 cri.go:89] found id: ""
	I1213 11:26:16.355953 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.355986 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:16.356007 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:16.356091 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:16.388564 1084269 cri.go:89] found id: ""
	I1213 11:26:16.388640 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.388662 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:16.388683 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:16.388772 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:16.417264 1084269 cri.go:89] found id: ""
	I1213 11:26:16.417338 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.417360 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:16.417382 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:16.417471 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:16.445945 1084269 cri.go:89] found id: ""
	I1213 11:26:16.446023 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.446055 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:16.446075 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:16.446155 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:16.484748 1084269 cri.go:89] found id: ""
	I1213 11:26:16.484770 1084269 logs.go:282] 0 containers: []
	W1213 11:26:16.484778 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:16.484845 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:16.484860 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:16.591160 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:16.591222 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:16.608858 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:16.608995 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:16.702189 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:16.702207 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:16.702219 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:16.738546 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:16.738576 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:19.275947 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:19.288038 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:19.288113 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:19.330916 1084269 cri.go:89] found id: ""
	I1213 11:26:19.330941 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.330950 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:19.330957 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:19.331013 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:19.358481 1084269 cri.go:89] found id: ""
	I1213 11:26:19.358504 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.358512 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:19.358518 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:19.358577 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:19.394144 1084269 cri.go:89] found id: ""
	I1213 11:26:19.394166 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.394175 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:19.394182 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:19.394240 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:19.421995 1084269 cri.go:89] found id: ""
	I1213 11:26:19.422022 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.422031 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:19.422038 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:19.422096 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:19.455927 1084269 cri.go:89] found id: ""
	I1213 11:26:19.455954 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.455964 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:19.455970 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:19.456026 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:19.488654 1084269 cri.go:89] found id: ""
	I1213 11:26:19.488684 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.488693 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:19.488700 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:19.488759 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:19.526775 1084269 cri.go:89] found id: ""
	I1213 11:26:19.526801 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.526810 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:19.526816 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:19.526873 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:19.557426 1084269 cri.go:89] found id: ""
	I1213 11:26:19.557449 1084269 logs.go:282] 0 containers: []
	W1213 11:26:19.557462 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:19.557472 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:19.557484 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:19.576134 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:19.576164 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:19.658839 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:19.658864 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:19.658876 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:19.693365 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:19.693443 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:19.728097 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:19.728120 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:22.311385 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:22.321570 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:22.321643 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:22.348129 1084269 cri.go:89] found id: ""
	I1213 11:26:22.348155 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.348165 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:22.348172 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:22.348228 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:22.378339 1084269 cri.go:89] found id: ""
	I1213 11:26:22.378368 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.378378 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:22.378384 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:22.378446 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:22.404151 1084269 cri.go:89] found id: ""
	I1213 11:26:22.404178 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.404186 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:22.404192 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:22.404255 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:22.434145 1084269 cri.go:89] found id: ""
	I1213 11:26:22.434167 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.434176 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:22.434182 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:22.434239 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:22.459015 1084269 cri.go:89] found id: ""
	I1213 11:26:22.459043 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.459053 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:22.459059 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:22.459120 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:22.489355 1084269 cri.go:89] found id: ""
	I1213 11:26:22.489379 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.489388 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:22.489394 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:22.489453 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:22.515756 1084269 cri.go:89] found id: ""
	I1213 11:26:22.515779 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.515788 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:22.515793 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:22.515854 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:22.549334 1084269 cri.go:89] found id: ""
	I1213 11:26:22.549361 1084269 logs.go:282] 0 containers: []
	W1213 11:26:22.549370 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:22.549380 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:22.549391 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:22.580747 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:22.580785 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:22.609339 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:22.609417 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:22.685569 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:22.685632 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:22.704559 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:22.704596 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:22.788664 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:25.289670 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:25.299536 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:25.299606 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:25.324544 1084269 cri.go:89] found id: ""
	I1213 11:26:25.324570 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.324579 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:25.324586 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:25.324644 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:25.352150 1084269 cri.go:89] found id: ""
	I1213 11:26:25.352178 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.352188 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:25.352195 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:25.352267 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:25.381628 1084269 cri.go:89] found id: ""
	I1213 11:26:25.381655 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.381671 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:25.381677 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:25.381736 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:25.407153 1084269 cri.go:89] found id: ""
	I1213 11:26:25.407179 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.407188 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:25.407194 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:25.407251 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:25.437350 1084269 cri.go:89] found id: ""
	I1213 11:26:25.437373 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.437382 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:25.437388 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:25.437448 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:25.468721 1084269 cri.go:89] found id: ""
	I1213 11:26:25.468747 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.468756 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:25.468762 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:25.468822 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:25.495004 1084269 cri.go:89] found id: ""
	I1213 11:26:25.495032 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.495041 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:25.495047 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:25.495110 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:25.521156 1084269 cri.go:89] found id: ""
	I1213 11:26:25.521183 1084269 logs.go:282] 0 containers: []
	W1213 11:26:25.521193 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:25.521202 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:25.521219 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:25.551960 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:25.551993 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:25.580340 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:25.580370 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:25.646815 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:25.646852 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:25.663521 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:25.663553 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:25.744513 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:28.244731 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:28.255907 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:28.255972 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:28.285493 1084269 cri.go:89] found id: ""
	I1213 11:26:28.285520 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.285529 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:28.285578 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:28.285638 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:28.310907 1084269 cri.go:89] found id: ""
	I1213 11:26:28.310935 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.310944 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:28.310950 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:28.311009 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:28.337072 1084269 cri.go:89] found id: ""
	I1213 11:26:28.337094 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.337104 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:28.337109 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:28.337189 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:28.364276 1084269 cri.go:89] found id: ""
	I1213 11:26:28.364299 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.364307 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:28.364314 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:28.364378 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:28.393903 1084269 cri.go:89] found id: ""
	I1213 11:26:28.393926 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.393934 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:28.393941 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:28.394000 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:28.419605 1084269 cri.go:89] found id: ""
	I1213 11:26:28.419631 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.419640 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:28.419647 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:28.419708 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:28.449650 1084269 cri.go:89] found id: ""
	I1213 11:26:28.449681 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.449690 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:28.449696 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:28.449757 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:28.476301 1084269 cri.go:89] found id: ""
	I1213 11:26:28.476325 1084269 logs.go:282] 0 containers: []
	W1213 11:26:28.476333 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:28.476342 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:28.476353 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:28.508056 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:28.508086 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:28.575457 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:28.575495 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:28.592493 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:28.592524 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:28.659516 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:28.659537 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:28.659549 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:31.190588 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:31.200656 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:31.200728 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:31.253002 1084269 cri.go:89] found id: ""
	I1213 11:26:31.253039 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.253049 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:31.253056 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:31.253129 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:31.288011 1084269 cri.go:89] found id: ""
	I1213 11:26:31.288038 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.288047 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:31.288053 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:31.288119 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:31.318533 1084269 cri.go:89] found id: ""
	I1213 11:26:31.318556 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.318565 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:31.318571 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:31.318628 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:31.349277 1084269 cri.go:89] found id: ""
	I1213 11:26:31.349300 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.349309 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:31.349315 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:31.349372 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:31.381808 1084269 cri.go:89] found id: ""
	I1213 11:26:31.381831 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.381841 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:31.381848 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:31.381908 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:31.407222 1084269 cri.go:89] found id: ""
	I1213 11:26:31.407246 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.407256 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:31.407263 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:31.407325 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:31.442601 1084269 cri.go:89] found id: ""
	I1213 11:26:31.442679 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.442702 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:31.442716 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:31.442780 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:31.467552 1084269 cri.go:89] found id: ""
	I1213 11:26:31.467575 1084269 logs.go:282] 0 containers: []
	W1213 11:26:31.467583 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:31.467594 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:31.467607 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:31.495169 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:31.495193 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:31.564771 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:31.564811 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:31.580503 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:31.580539 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:31.641330 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:31.641362 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:31.641374 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:34.171571 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:34.182057 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:34.182140 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:34.207764 1084269 cri.go:89] found id: ""
	I1213 11:26:34.207789 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.207798 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:34.207804 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:34.207864 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:34.243451 1084269 cri.go:89] found id: ""
	I1213 11:26:34.243474 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.243483 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:34.243499 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:34.243558 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:34.278860 1084269 cri.go:89] found id: ""
	I1213 11:26:34.278888 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.278896 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:34.278902 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:34.278959 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:34.311087 1084269 cri.go:89] found id: ""
	I1213 11:26:34.311112 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.311121 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:34.311127 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:34.311182 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:34.335757 1084269 cri.go:89] found id: ""
	I1213 11:26:34.335784 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.335794 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:34.335800 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:34.335866 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:34.361839 1084269 cri.go:89] found id: ""
	I1213 11:26:34.361864 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.361874 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:34.361881 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:34.361942 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:34.387416 1084269 cri.go:89] found id: ""
	I1213 11:26:34.387439 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.387447 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:34.387453 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:34.387511 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:34.411573 1084269 cri.go:89] found id: ""
	I1213 11:26:34.411601 1084269 logs.go:282] 0 containers: []
	W1213 11:26:34.411610 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:34.411620 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:34.411633 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:34.483339 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:34.483377 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:34.499996 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:34.500024 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:34.566651 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:34.566723 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:34.566753 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:34.596681 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:34.596717 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:37.126725 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:37.138487 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:37.138557 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:37.176185 1084269 cri.go:89] found id: ""
	I1213 11:26:37.176215 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.176224 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:37.176231 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:37.176310 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:37.212163 1084269 cri.go:89] found id: ""
	I1213 11:26:37.212193 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.212207 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:37.212217 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:37.212291 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:37.254957 1084269 cri.go:89] found id: ""
	I1213 11:26:37.254986 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.254998 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:37.255006 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:37.255097 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:37.346053 1084269 cri.go:89] found id: ""
	I1213 11:26:37.346074 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.346082 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:37.346088 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:37.346147 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:37.380452 1084269 cri.go:89] found id: ""
	I1213 11:26:37.380474 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.380482 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:37.380488 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:37.380547 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:37.411987 1084269 cri.go:89] found id: ""
	I1213 11:26:37.412020 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.412028 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:37.412042 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:37.412104 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:37.452063 1084269 cri.go:89] found id: ""
	I1213 11:26:37.452104 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.452118 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:37.452128 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:37.452212 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:37.488012 1084269 cri.go:89] found id: ""
	I1213 11:26:37.488042 1084269 logs.go:282] 0 containers: []
	W1213 11:26:37.488051 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:37.488061 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:37.488073 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:37.569655 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:37.569695 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:37.587851 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:37.587878 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:37.669435 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
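The refused connection on localhost:8443 is consistent with the empty crictl listings above: no kube-apiserver container exists, so nothing is serving the port that the kubeconfig points at. A minimal manual check from inside the node (a sketch; shell access via minikube ssh and the presence of the ss utility on the node are assumptions, not part of the captured run):

    # Same query minikube issues above; empty output means no apiserver container
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Check whether anything is listening on the API port at all
    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"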
	I1213 11:26:37.669462 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:37.669476 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:37.701138 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:37.701173 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
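For reference, the gathering cycle above (per-component crictl listings, kubelet and CRI-O journals, dmesg, describe nodes, container status) can be replayed by hand. This is a sketch under the assumption of shell access to the node (for example via minikube ssh) and the same v1.35.0-beta.0 binary path; every command is taken verbatim from the cycle above, only the loop wrapper is added:

    # List containers for each control-plane component, as the cycle does one by one
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl ps -a --quiet --name="$name"
    done

    # Service journals and recent kernel messages
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # Node description via the bundled kubectl; fails with "connection refused"
    # for as long as no kube-apiserver container is running
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

    # Overall container status (minikube falls back to docker if crictl is absent)
    sudo crictl ps -a || sudo docker ps -a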
	I1213 11:26:40.234618 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:40.246414 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:40.246494 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:40.275940 1084269 cri.go:89] found id: ""
	I1213 11:26:40.275978 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.275987 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:40.275994 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:40.276053 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:40.307989 1084269 cri.go:89] found id: ""
	I1213 11:26:40.308018 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.308028 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:40.308034 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:40.308093 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:40.334746 1084269 cri.go:89] found id: ""
	I1213 11:26:40.334774 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.334783 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:40.334789 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:40.334862 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:40.366769 1084269 cri.go:89] found id: ""
	I1213 11:26:40.366796 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.366806 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:40.366813 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:40.366872 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:40.391989 1084269 cri.go:89] found id: ""
	I1213 11:26:40.392017 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.392026 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:40.392032 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:40.392093 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:40.419220 1084269 cri.go:89] found id: ""
	I1213 11:26:40.419248 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.419258 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:40.419265 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:40.419327 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:40.447601 1084269 cri.go:89] found id: ""
	I1213 11:26:40.447624 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.447632 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:40.447638 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:40.447695 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:40.473702 1084269 cri.go:89] found id: ""
	I1213 11:26:40.473725 1084269 logs.go:282] 0 containers: []
	W1213 11:26:40.473732 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:40.473741 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:40.473752 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:40.546551 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:40.546590 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:40.562576 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:40.562608 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:40.634357 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:40.634380 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:40.634394 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:40.665566 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:40.665600 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:43.200237 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:43.210338 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:43.210405 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:43.246697 1084269 cri.go:89] found id: ""
	I1213 11:26:43.246724 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.246732 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:43.246738 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:43.246797 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:43.275966 1084269 cri.go:89] found id: ""
	I1213 11:26:43.275989 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.275998 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:43.276004 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:43.276061 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:43.306688 1084269 cri.go:89] found id: ""
	I1213 11:26:43.306711 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.306719 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:43.306725 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:43.306782 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:43.336876 1084269 cri.go:89] found id: ""
	I1213 11:26:43.336900 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.336909 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:43.336915 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:43.336973 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:43.362368 1084269 cri.go:89] found id: ""
	I1213 11:26:43.362436 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.362462 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:43.362482 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:43.362555 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:43.391827 1084269 cri.go:89] found id: ""
	I1213 11:26:43.391899 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.391923 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:43.391944 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:43.392018 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:43.417975 1084269 cri.go:89] found id: ""
	I1213 11:26:43.418005 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.418015 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:43.418021 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:43.418081 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:43.448442 1084269 cri.go:89] found id: ""
	I1213 11:26:43.448469 1084269 logs.go:282] 0 containers: []
	W1213 11:26:43.448479 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:43.448488 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:43.448499 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:43.514635 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:43.514675 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:43.532340 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:43.532370 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:43.594915 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:43.594935 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:43.594949 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:43.624828 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:43.624866 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:46.154783 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:46.165439 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:46.165506 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:46.191044 1084269 cri.go:89] found id: ""
	I1213 11:26:46.191069 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.191077 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:46.191089 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:46.191148 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:46.215773 1084269 cri.go:89] found id: ""
	I1213 11:26:46.215808 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.215818 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:46.215824 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:46.215885 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:46.255936 1084269 cri.go:89] found id: ""
	I1213 11:26:46.255962 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.255972 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:46.255978 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:46.256036 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:46.283019 1084269 cri.go:89] found id: ""
	I1213 11:26:46.283041 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.283049 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:46.283055 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:46.283111 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:46.316834 1084269 cri.go:89] found id: ""
	I1213 11:26:46.316857 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.316866 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:46.316872 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:46.316933 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:46.343475 1084269 cri.go:89] found id: ""
	I1213 11:26:46.343502 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.343512 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:46.343518 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:46.343579 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:46.367945 1084269 cri.go:89] found id: ""
	I1213 11:26:46.367972 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.367983 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:46.367991 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:46.368048 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:46.393065 1084269 cri.go:89] found id: ""
	I1213 11:26:46.393091 1084269 logs.go:282] 0 containers: []
	W1213 11:26:46.393105 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:46.393115 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:46.393127 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:46.465596 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:46.465635 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:46.481762 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:46.481792 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:46.550711 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:46.550781 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:46.550801 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:46.582385 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:46.582421 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:49.111191 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:49.130925 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:49.130999 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:49.168391 1084269 cri.go:89] found id: ""
	I1213 11:26:49.168416 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.168424 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:49.168430 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:49.168486 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:49.196350 1084269 cri.go:89] found id: ""
	I1213 11:26:49.196371 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.196380 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:49.196386 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:49.196442 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:49.233114 1084269 cri.go:89] found id: ""
	I1213 11:26:49.233135 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.233144 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:49.233149 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:49.233206 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:49.315842 1084269 cri.go:89] found id: ""
	I1213 11:26:49.315865 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.315874 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:49.315880 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:49.315941 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:49.362421 1084269 cri.go:89] found id: ""
	I1213 11:26:49.362443 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.362451 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:49.362457 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:49.362514 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:49.396031 1084269 cri.go:89] found id: ""
	I1213 11:26:49.396059 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.396067 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:49.396073 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:49.396135 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:49.432184 1084269 cri.go:89] found id: ""
	I1213 11:26:49.432208 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.432218 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:49.432224 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:49.432298 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:49.460246 1084269 cri.go:89] found id: ""
	I1213 11:26:49.460268 1084269 logs.go:282] 0 containers: []
	W1213 11:26:49.460276 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:49.460286 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:49.460298 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:49.529709 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:49.529829 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:49.546220 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:49.546252 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:49.613777 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:49.613798 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:49.613815 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:49.645075 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:49.645108 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:52.174588 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:52.185059 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:52.185132 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:52.211563 1084269 cri.go:89] found id: ""
	I1213 11:26:52.211589 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.211599 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:52.211605 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:52.211683 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:52.253513 1084269 cri.go:89] found id: ""
	I1213 11:26:52.253584 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.253613 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:52.253627 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:52.253720 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:52.286285 1084269 cri.go:89] found id: ""
	I1213 11:26:52.286315 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.286325 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:52.286331 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:52.286389 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:52.319049 1084269 cri.go:89] found id: ""
	I1213 11:26:52.319079 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.319089 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:52.319095 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:52.319152 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:52.345086 1084269 cri.go:89] found id: ""
	I1213 11:26:52.345110 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.345119 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:52.345125 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:52.345182 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:52.375648 1084269 cri.go:89] found id: ""
	I1213 11:26:52.375717 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.375740 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:52.375752 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:52.375826 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:52.402419 1084269 cri.go:89] found id: ""
	I1213 11:26:52.402453 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.402461 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:52.402485 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:52.402608 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:52.437697 1084269 cri.go:89] found id: ""
	I1213 11:26:52.437731 1084269 logs.go:282] 0 containers: []
	W1213 11:26:52.437740 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:52.437769 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:52.437788 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:52.512197 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:52.512236 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:52.529202 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:52.529234 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:52.595772 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:52.595839 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:52.595859 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:52.627669 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:52.627706 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:55.157641 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:55.169608 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:55.169676 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:55.259527 1084269 cri.go:89] found id: ""
	I1213 11:26:55.259556 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.259564 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:55.259570 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:55.259633 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:55.314286 1084269 cri.go:89] found id: ""
	I1213 11:26:55.314311 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.314321 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:55.314327 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:55.314401 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:55.340072 1084269 cri.go:89] found id: ""
	I1213 11:26:55.340099 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.340108 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:55.340114 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:55.340195 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:55.366244 1084269 cri.go:89] found id: ""
	I1213 11:26:55.366272 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.366280 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:55.366287 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:55.366344 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:55.392883 1084269 cri.go:89] found id: ""
	I1213 11:26:55.392910 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.392919 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:55.392925 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:55.392982 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:55.417525 1084269 cri.go:89] found id: ""
	I1213 11:26:55.417570 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.417579 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:55.417585 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:55.417641 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:55.447904 1084269 cri.go:89] found id: ""
	I1213 11:26:55.447945 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.447954 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:55.447961 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:55.448021 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:55.473247 1084269 cri.go:89] found id: ""
	I1213 11:26:55.473274 1084269 logs.go:282] 0 containers: []
	W1213 11:26:55.473284 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:55.473293 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:55.473304 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:55.505901 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:55.505930 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:55.578888 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:55.578930 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:55.596128 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:55.596155 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:55.661309 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:55.661340 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:55.661353 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:26:58.193464 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:26:58.203538 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:26:58.203633 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:26:58.231551 1084269 cri.go:89] found id: ""
	I1213 11:26:58.231574 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.231583 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:26:58.231589 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:26:58.231677 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:26:58.263648 1084269 cri.go:89] found id: ""
	I1213 11:26:58.263670 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.263679 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:26:58.263685 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:26:58.263747 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:26:58.299196 1084269 cri.go:89] found id: ""
	I1213 11:26:58.299218 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.299231 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:26:58.299237 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:26:58.299308 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:26:58.324979 1084269 cri.go:89] found id: ""
	I1213 11:26:58.325050 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.325074 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:26:58.325094 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:26:58.325187 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:26:58.351094 1084269 cri.go:89] found id: ""
	I1213 11:26:58.351120 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.351129 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:26:58.351136 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:26:58.351193 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:26:58.377568 1084269 cri.go:89] found id: ""
	I1213 11:26:58.377596 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.377605 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:26:58.377612 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:26:58.377681 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:26:58.408142 1084269 cri.go:89] found id: ""
	I1213 11:26:58.408166 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.408175 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:26:58.408182 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:26:58.408251 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:26:58.441643 1084269 cri.go:89] found id: ""
	I1213 11:26:58.441678 1084269 logs.go:282] 0 containers: []
	W1213 11:26:58.441689 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:26:58.441711 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:26:58.441732 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:26:58.471134 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:26:58.471163 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:26:58.543139 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:26:58.543186 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:26:58.559596 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:26:58.559627 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:26:58.633489 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:26:58.633655 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:26:58.633685 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:01.168594 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:01.179362 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:01.179441 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:01.205788 1084269 cri.go:89] found id: ""
	I1213 11:27:01.205814 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.205823 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:01.205830 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:01.205890 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:01.243695 1084269 cri.go:89] found id: ""
	I1213 11:27:01.243719 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.243728 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:01.243734 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:01.243794 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:01.307868 1084269 cri.go:89] found id: ""
	I1213 11:27:01.307897 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.307907 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:01.307913 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:01.307975 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:01.341464 1084269 cri.go:89] found id: ""
	I1213 11:27:01.341492 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.341502 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:01.341508 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:01.341596 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:01.383170 1084269 cri.go:89] found id: ""
	I1213 11:27:01.383248 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.383271 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:01.383291 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:01.383416 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:01.414206 1084269 cri.go:89] found id: ""
	I1213 11:27:01.414281 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.414318 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:01.414342 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:01.414432 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:01.444501 1084269 cri.go:89] found id: ""
	I1213 11:27:01.444572 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.444610 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:01.444634 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:01.444720 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:01.475776 1084269 cri.go:89] found id: ""
	I1213 11:27:01.475838 1084269 logs.go:282] 0 containers: []
	W1213 11:27:01.475871 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:01.475898 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:01.475926 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:01.557978 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:01.558071 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:01.578162 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:01.578187 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:01.667075 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:01.667092 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:01.667103 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:01.702498 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:01.702529 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:04.237937 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:04.248905 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:04.248975 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:04.275929 1084269 cri.go:89] found id: ""
	I1213 11:27:04.275950 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.275959 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:04.275965 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:04.276026 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:04.306524 1084269 cri.go:89] found id: ""
	I1213 11:27:04.306552 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.306560 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:04.306567 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:04.306626 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:04.331103 1084269 cri.go:89] found id: ""
	I1213 11:27:04.331130 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.331139 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:04.331145 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:04.331209 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:04.363940 1084269 cri.go:89] found id: ""
	I1213 11:27:04.363963 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.364028 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:04.364039 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:04.364128 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:04.393690 1084269 cri.go:89] found id: ""
	I1213 11:27:04.393721 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.393730 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:04.393736 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:04.393796 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:04.418947 1084269 cri.go:89] found id: ""
	I1213 11:27:04.418974 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.418983 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:04.418990 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:04.419049 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:04.447271 1084269 cri.go:89] found id: ""
	I1213 11:27:04.447298 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.447308 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:04.447315 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:04.447377 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:04.472453 1084269 cri.go:89] found id: ""
	I1213 11:27:04.472481 1084269 logs.go:282] 0 containers: []
	W1213 11:27:04.472490 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:04.472501 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:04.472512 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:04.538688 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:04.538724 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:04.555049 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:04.555080 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:04.620090 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:04.620111 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:04.620125 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:04.651838 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:04.651871 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:07.185660 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:07.195537 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:07.195607 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:07.226511 1084269 cri.go:89] found id: ""
	I1213 11:27:07.226532 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.226541 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:07.226547 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:07.226601 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:07.259711 1084269 cri.go:89] found id: ""
	I1213 11:27:07.259735 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.259744 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:07.259750 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:07.259815 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:07.287121 1084269 cri.go:89] found id: ""
	I1213 11:27:07.287144 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.287161 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:07.287173 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:07.287232 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:07.322848 1084269 cri.go:89] found id: ""
	I1213 11:27:07.322876 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.322885 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:07.322891 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:07.322952 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:07.352188 1084269 cri.go:89] found id: ""
	I1213 11:27:07.352217 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.352225 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:07.352231 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:07.352293 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:07.377657 1084269 cri.go:89] found id: ""
	I1213 11:27:07.377682 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.377691 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:07.377698 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:07.377760 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:07.402666 1084269 cri.go:89] found id: ""
	I1213 11:27:07.402694 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.402703 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:07.402709 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:07.402767 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:07.435491 1084269 cri.go:89] found id: ""
	I1213 11:27:07.435517 1084269 logs.go:282] 0 containers: []
	W1213 11:27:07.435526 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:07.435535 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:07.435546 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:07.465979 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:07.466013 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:07.494689 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:07.494719 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:07.563318 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:07.563356 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:07.580503 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:07.580533 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:07.644698 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:10.144966 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:10.155041 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:10.155115 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:10.182821 1084269 cri.go:89] found id: ""
	I1213 11:27:10.182846 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.182855 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:10.182861 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:10.182922 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:10.207534 1084269 cri.go:89] found id: ""
	I1213 11:27:10.207557 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.207566 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:10.207579 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:10.207636 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:10.240861 1084269 cri.go:89] found id: ""
	I1213 11:27:10.240884 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.240892 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:10.240898 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:10.240961 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:10.277190 1084269 cri.go:89] found id: ""
	I1213 11:27:10.277213 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.277221 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:10.277228 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:10.277287 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:10.305633 1084269 cri.go:89] found id: ""
	I1213 11:27:10.305673 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.305688 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:10.305694 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:10.305769 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:10.332709 1084269 cri.go:89] found id: ""
	I1213 11:27:10.332743 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.332768 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:10.332778 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:10.332858 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:10.359284 1084269 cri.go:89] found id: ""
	I1213 11:27:10.359309 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.359318 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:10.359325 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:10.359385 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:10.386508 1084269 cri.go:89] found id: ""
	I1213 11:27:10.386531 1084269 logs.go:282] 0 containers: []
	W1213 11:27:10.386540 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:10.386549 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:10.386563 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:10.416943 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:10.416972 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:10.484491 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:10.484527 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:10.502146 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:10.502178 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:10.569637 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:10.569664 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:10.569684 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
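
When none of the expected containers are found, minikube collects a fixed diagnostic bundle over SSH. Each cycle in this run gathers the same five sources (the order varies between cycles):

    $ sudo journalctl -u kubelet -n 400
    $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    $ sudo journalctl -u crio -n 400
    $ sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Only the kubectl step can fail this way: it is the one command that needs the apiserver, while the journal and CRI queries run locally on the node.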
	I1213 11:27:13.101863 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:13.111701 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:13.111776 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:13.137180 1084269 cri.go:89] found id: ""
	I1213 11:27:13.137210 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.137220 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:13.137226 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:13.137289 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:13.163004 1084269 cri.go:89] found id: ""
	I1213 11:27:13.163030 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.163039 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:13.163046 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:13.163104 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:13.190050 1084269 cri.go:89] found id: ""
	I1213 11:27:13.190073 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.190082 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:13.190088 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:13.190152 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:13.216634 1084269 cri.go:89] found id: ""
	I1213 11:27:13.216658 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.216667 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:13.216673 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:13.216729 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:13.243643 1084269 cri.go:89] found id: ""
	I1213 11:27:13.243671 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.243679 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:13.243685 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:13.243804 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:13.279474 1084269 cri.go:89] found id: ""
	I1213 11:27:13.279503 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.279518 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:13.279524 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:13.279580 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:13.306248 1084269 cri.go:89] found id: ""
	I1213 11:27:13.306271 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.306281 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:13.306287 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:13.306342 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:13.334196 1084269 cri.go:89] found id: ""
	I1213 11:27:13.334233 1084269 logs.go:282] 0 containers: []
	W1213 11:27:13.334242 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:13.334267 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:13.334286 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:13.400890 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:13.400926 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:13.417185 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:13.417213 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:13.487150 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:13.487220 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:13.487249 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:13.517885 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:13.517920 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
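
The recurring "connection to the server localhost:8443 was refused" comes from kubectl running on the node itself: the kubeconfig at /var/lib/minikube/kubeconfig points at the apiserver's secure port, and a refused connection means nothing is listening there, consistent with the CRI reporting no kube-apiserver container at all. A quick way to confirm from inside the node (a suggested check, not a command from this run):

    $ sudo ss -ltn | grep 8443 || echo 'no listener on 8443'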
	I1213 11:27:16.048224 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:16.059318 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:16.059394 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:16.086479 1084269 cri.go:89] found id: ""
	I1213 11:27:16.086504 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.086513 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:16.086519 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:16.086593 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:16.116594 1084269 cri.go:89] found id: ""
	I1213 11:27:16.116617 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.116626 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:16.116633 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:16.116697 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:16.144461 1084269 cri.go:89] found id: ""
	I1213 11:27:16.144492 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.144500 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:16.144505 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:16.144561 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:16.171800 1084269 cri.go:89] found id: ""
	I1213 11:27:16.171827 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.171837 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:16.171843 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:16.171900 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:16.197422 1084269 cri.go:89] found id: ""
	I1213 11:27:16.197451 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.197460 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:16.197466 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:16.197525 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:16.224854 1084269 cri.go:89] found id: ""
	I1213 11:27:16.224882 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.224893 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:16.224900 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:16.224965 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:16.255027 1084269 cri.go:89] found id: ""
	I1213 11:27:16.255055 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.255064 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:16.255070 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:16.255135 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:16.291813 1084269 cri.go:89] found id: ""
	I1213 11:27:16.291839 1084269 logs.go:282] 0 containers: []
	W1213 11:27:16.291848 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:16.291864 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:16.291879 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:16.308180 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:16.308211 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:16.371609 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:16.371627 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:16.371640 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:16.402820 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:16.402858 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:16.436046 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:16.436077 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:19.014104 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:19.026346 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:19.026429 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:19.068463 1084269 cri.go:89] found id: ""
	I1213 11:27:19.068491 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.068500 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:19.068547 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:19.068612 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:19.095082 1084269 cri.go:89] found id: ""
	I1213 11:27:19.095107 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.095115 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:19.095122 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:19.095179 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:19.120945 1084269 cri.go:89] found id: ""
	I1213 11:27:19.120966 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.120974 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:19.120980 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:19.121044 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:19.145878 1084269 cri.go:89] found id: ""
	I1213 11:27:19.145907 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.145917 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:19.145923 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:19.145988 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:19.180254 1084269 cri.go:89] found id: ""
	I1213 11:27:19.180278 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.180286 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:19.180293 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:19.180350 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:19.206033 1084269 cri.go:89] found id: ""
	I1213 11:27:19.206056 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.206064 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:19.206070 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:19.206131 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:19.239086 1084269 cri.go:89] found id: ""
	I1213 11:27:19.239108 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.239116 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:19.239121 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:19.239184 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:19.273142 1084269 cri.go:89] found id: ""
	I1213 11:27:19.273164 1084269 logs.go:282] 0 containers: []
	W1213 11:27:19.273173 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:19.273182 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:19.273193 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:19.335546 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:19.335564 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:19.335577 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:19.366013 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:19.366049 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:19.395181 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:19.395219 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:19.467952 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:19.467989 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
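
The dmesg step keeps the kernel log capture small and machine-readable. Assuming util-linux dmesg, the short flags expand as follows: -P disables the pager, -H formats timestamps human-readably, -L=never strips color codes (which would garble the captured log), and --level restricts output to warnings and worse before tail caps it at 400 lines:

    $ sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400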
	I1213 11:27:21.984673 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:22.002194 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:22.002286 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:22.033375 1084269 cri.go:89] found id: ""
	I1213 11:27:22.033402 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.033412 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:22.033418 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:22.033477 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:22.068358 1084269 cri.go:89] found id: ""
	I1213 11:27:22.068386 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.068396 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:22.068402 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:22.068463 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:22.102230 1084269 cri.go:89] found id: ""
	I1213 11:27:22.102257 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.102266 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:22.102272 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:22.102338 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:22.148222 1084269 cri.go:89] found id: ""
	I1213 11:27:22.148250 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.148260 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:22.148267 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:22.148328 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:22.187827 1084269 cri.go:89] found id: ""
	I1213 11:27:22.187856 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.187865 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:22.187872 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:22.187987 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:22.249518 1084269 cri.go:89] found id: ""
	I1213 11:27:22.249565 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.249574 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:22.249581 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:22.249641 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:22.319897 1084269 cri.go:89] found id: ""
	I1213 11:27:22.319920 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.319928 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:22.319934 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:22.319999 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:22.355848 1084269 cri.go:89] found id: ""
	I1213 11:27:22.355869 1084269 logs.go:282] 0 containers: []
	W1213 11:27:22.355877 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:22.355887 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:22.355899 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:22.381412 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:22.381495 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:22.472138 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:22.472158 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:22.472174 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:22.505834 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:22.505871 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:22.536391 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:22.536422 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:25.106486 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:25.116943 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:25.117014 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:25.154506 1084269 cri.go:89] found id: ""
	I1213 11:27:25.154534 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.154543 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:25.154550 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:25.154611 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:25.191640 1084269 cri.go:89] found id: ""
	I1213 11:27:25.191670 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.191679 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:25.191685 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:25.191745 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:25.240095 1084269 cri.go:89] found id: ""
	I1213 11:27:25.240124 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.240133 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:25.240140 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:25.240200 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:25.286043 1084269 cri.go:89] found id: ""
	I1213 11:27:25.286072 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.286080 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:25.286087 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:25.286146 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:25.314922 1084269 cri.go:89] found id: ""
	I1213 11:27:25.314966 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.314975 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:25.314981 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:25.315085 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:25.365702 1084269 cri.go:89] found id: ""
	I1213 11:27:25.365732 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.365741 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:25.365748 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:25.365805 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:25.396289 1084269 cri.go:89] found id: ""
	I1213 11:27:25.396320 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.396329 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:25.396335 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:25.396392 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:25.432184 1084269 cri.go:89] found id: ""
	I1213 11:27:25.432211 1084269 logs.go:282] 0 containers: []
	W1213 11:27:25.432221 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:25.432231 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:25.432243 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:25.509092 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:25.509198 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:25.530534 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:25.530610 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:25.621968 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:25.621985 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:25.621998 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:25.664345 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:25.664422 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
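
The container-status probe is written defensively: 'which crictl || echo crictl' substitutes the resolved binary path when crictl is installed and the bare command name otherwise, and the trailing '|| sudo docker ps -a' covers hosts where crictl is missing entirely. A long-hand equivalent of the logged one-liner:

    $ CRICTL=$(which crictl || echo crictl)
    $ sudo "$CRICTL" ps -a || sudo docker ps -a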
	I1213 11:27:28.208495 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:28.218649 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:28.218722 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:28.258967 1084269 cri.go:89] found id: ""
	I1213 11:27:28.258991 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.258999 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:28.259005 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:28.259064 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:28.295076 1084269 cri.go:89] found id: ""
	I1213 11:27:28.295110 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.295120 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:28.295126 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:28.295190 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:28.320582 1084269 cri.go:89] found id: ""
	I1213 11:27:28.320606 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.320614 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:28.320620 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:28.320680 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:28.347135 1084269 cri.go:89] found id: ""
	I1213 11:27:28.347159 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.347168 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:28.347175 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:28.347233 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:28.374376 1084269 cri.go:89] found id: ""
	I1213 11:27:28.374399 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.374408 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:28.374414 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:28.374473 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:28.404128 1084269 cri.go:89] found id: ""
	I1213 11:27:28.404154 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.404162 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:28.404168 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:28.404228 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:28.438841 1084269 cri.go:89] found id: ""
	I1213 11:27:28.438864 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.438872 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:28.438878 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:28.438935 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:28.464651 1084269 cri.go:89] found id: ""
	I1213 11:27:28.464674 1084269 logs.go:282] 0 containers: []
	W1213 11:27:28.464682 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:28.464691 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:28.464703 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:28.534724 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:28.534762 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:28.553641 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:28.553669 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:28.641066 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:28.641084 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:28.641097 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:28.676914 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:28.676957 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:31.219208 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:31.230027 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:31.230106 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:31.261634 1084269 cri.go:89] found id: ""
	I1213 11:27:31.261665 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.261674 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:31.261682 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:31.261743 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:31.292505 1084269 cri.go:89] found id: ""
	I1213 11:27:31.292527 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.292535 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:31.292541 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:31.292598 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:31.317500 1084269 cri.go:89] found id: ""
	I1213 11:27:31.317523 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.317562 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:31.317570 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:31.317625 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:31.344596 1084269 cri.go:89] found id: ""
	I1213 11:27:31.344619 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.344629 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:31.344634 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:31.344696 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:31.370178 1084269 cri.go:89] found id: ""
	I1213 11:27:31.370204 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.370213 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:31.370219 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:31.370276 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:31.394445 1084269 cri.go:89] found id: ""
	I1213 11:27:31.394467 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.394476 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:31.394483 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:31.394579 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:31.419015 1084269 cri.go:89] found id: ""
	I1213 11:27:31.419041 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.419050 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:31.419056 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:31.419145 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:31.447373 1084269 cri.go:89] found id: ""
	I1213 11:27:31.447398 1084269 logs.go:282] 0 containers: []
	W1213 11:27:31.447407 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:31.447417 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:31.447428 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:31.508745 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:31.508766 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:31.508779 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:31.539396 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:31.539431 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:31.572703 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:31.572732 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:31.639135 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:31.639174 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:34.155325 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:34.165321 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:34.165391 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:34.190274 1084269 cri.go:89] found id: ""
	I1213 11:27:34.190306 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.190316 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:34.190323 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:34.190379 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:34.215681 1084269 cri.go:89] found id: ""
	I1213 11:27:34.215702 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.215711 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:34.215718 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:34.215782 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:34.244074 1084269 cri.go:89] found id: ""
	I1213 11:27:34.244099 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.244108 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:34.244115 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:34.244171 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:34.272538 1084269 cri.go:89] found id: ""
	I1213 11:27:34.272563 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.272573 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:34.272579 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:34.272637 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:34.299890 1084269 cri.go:89] found id: ""
	I1213 11:27:34.299917 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.299926 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:34.299933 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:34.299993 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:34.324988 1084269 cri.go:89] found id: ""
	I1213 11:27:34.325016 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.325028 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:34.325036 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:34.325103 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:34.352009 1084269 cri.go:89] found id: ""
	I1213 11:27:34.352041 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.352050 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:34.352057 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:34.352163 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:34.382189 1084269 cri.go:89] found id: ""
	I1213 11:27:34.382215 1084269 logs.go:282] 0 containers: []
	W1213 11:27:34.382224 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:34.382234 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:34.382248 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:34.447724 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:34.447765 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:34.464282 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:34.464315 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:34.528325 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:34.528348 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:34.528361 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:34.559709 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:34.559745 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:37.091096 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:37.101393 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:37.101465 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:37.127377 1084269 cri.go:89] found id: ""
	I1213 11:27:37.127401 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.127411 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:37.127417 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:37.127474 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:37.153208 1084269 cri.go:89] found id: ""
	I1213 11:27:37.153234 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.153244 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:37.153250 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:37.153307 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:37.178845 1084269 cri.go:89] found id: ""
	I1213 11:27:37.178870 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.178879 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:37.178886 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:37.178943 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:37.203998 1084269 cri.go:89] found id: ""
	I1213 11:27:37.204021 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.204030 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:37.204036 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:37.204093 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:37.238510 1084269 cri.go:89] found id: ""
	I1213 11:27:37.238532 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.238541 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:37.238547 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:37.238614 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:37.271515 1084269 cri.go:89] found id: ""
	I1213 11:27:37.271547 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.271555 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:37.271562 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:37.271625 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:37.298223 1084269 cri.go:89] found id: ""
	I1213 11:27:37.298246 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.298254 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:37.298260 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:37.298330 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:37.322675 1084269 cri.go:89] found id: ""
	I1213 11:27:37.322703 1084269 logs.go:282] 0 containers: []
	W1213 11:27:37.322713 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:37.322722 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:37.322733 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:37.388969 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:37.389006 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:37.405625 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:37.405655 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:37.472384 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:37.472405 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:37.472417 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:37.503632 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:37.503668 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:40.034132 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:40.046244 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:40.046320 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:40.074473 1084269 cri.go:89] found id: ""
	I1213 11:27:40.074499 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.074509 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:40.074516 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:40.074578 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:40.103474 1084269 cri.go:89] found id: ""
	I1213 11:27:40.103501 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.103511 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:40.103518 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:40.103579 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:40.133248 1084269 cri.go:89] found id: ""
	I1213 11:27:40.133288 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.133301 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:40.133311 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:40.133393 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:40.165326 1084269 cri.go:89] found id: ""
	I1213 11:27:40.165356 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.165365 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:40.165377 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:40.165436 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:40.198452 1084269 cri.go:89] found id: ""
	I1213 11:27:40.198480 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.198488 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:40.198494 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:40.198560 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:40.231941 1084269 cri.go:89] found id: ""
	I1213 11:27:40.231980 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.232004 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:40.232018 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:40.232094 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:40.267139 1084269 cri.go:89] found id: ""
	I1213 11:27:40.267168 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.267176 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:40.267183 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:40.267245 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:40.307253 1084269 cri.go:89] found id: ""
	I1213 11:27:40.307278 1084269 logs.go:282] 0 containers: []
	W1213 11:27:40.307288 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:40.307297 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:40.307308 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:40.338652 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:40.338686 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:40.370577 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:40.370605 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:40.443236 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:40.443286 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:40.460621 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:40.460652 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:40.560395 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
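
By this point the loop has polled from 11:27:10 past 11:27:40 with identical results: no control-plane containers ever appeared, so the wait can only keep cycling until its timeout. With CRI-O reporting no containers at all, the natural follow-up is to check whether kubelet is running and whether the kubeadm static-pod manifests exist (suggested next steps, not commands from this run):

    $ systemctl status kubelet --no-pager
    $ ls /etc/kubernetes/manifests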
	I1213 11:27:43.060844 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:43.071144 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:43.071213 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:43.095992 1084269 cri.go:89] found id: ""
	I1213 11:27:43.096020 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.096029 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:43.096036 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:43.096106 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:43.120840 1084269 cri.go:89] found id: ""
	I1213 11:27:43.120870 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.120879 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:43.120886 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:43.120945 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:43.145816 1084269 cri.go:89] found id: ""
	I1213 11:27:43.145842 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.145852 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:43.145858 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:43.145918 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:43.172793 1084269 cri.go:89] found id: ""
	I1213 11:27:43.172821 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.172830 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:43.172836 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:43.172896 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:43.200358 1084269 cri.go:89] found id: ""
	I1213 11:27:43.200381 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.200390 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:43.200397 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:43.200454 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:43.227414 1084269 cri.go:89] found id: ""
	I1213 11:27:43.227441 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.227450 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:43.227456 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:43.227514 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:43.257166 1084269 cri.go:89] found id: ""
	I1213 11:27:43.257193 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.257202 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:43.257208 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:43.257266 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:43.286357 1084269 cri.go:89] found id: ""
	I1213 11:27:43.286384 1084269 logs.go:282] 0 containers: []
	W1213 11:27:43.286394 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:43.286403 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:43.286415 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:43.348039 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:43.348057 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:43.348069 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:43.379984 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:43.380019 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:43.408094 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:43.408125 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:43.480364 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:43.480399 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
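The polling rounds above (one per control-plane component, repeated every few seconds) can be approximated with a short Go sketch. The component list and the crictl invocation mirror the commands quoted in the log; the helper name is illustrative, not minikube's actual API, and the sketch assumes crictl is on PATH with sudo available.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the "sudo crictl ps -a --quiet --name=<component>"
// calls in the log: it returns the IDs of all containers (any state) whose
// name matches the given component.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		// In the failing run above every list comes back empty, because the
		// control plane never started.
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	}
}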
	I1213 11:27:45.997848 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:46.009454 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:46.009526 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:46.042371 1084269 cri.go:89] found id: ""
	I1213 11:27:46.042396 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.042404 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:46.042411 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:46.042474 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:46.072635 1084269 cri.go:89] found id: ""
	I1213 11:27:46.072661 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.072676 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:46.072682 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:46.072741 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:46.100851 1084269 cri.go:89] found id: ""
	I1213 11:27:46.100878 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.100887 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:46.100894 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:46.100952 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:46.129072 1084269 cri.go:89] found id: ""
	I1213 11:27:46.129099 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.129109 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:46.129115 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:46.129174 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:46.154997 1084269 cri.go:89] found id: ""
	I1213 11:27:46.155074 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.155085 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:46.155092 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:46.155153 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:46.181413 1084269 cri.go:89] found id: ""
	I1213 11:27:46.181435 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.181443 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:46.181449 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:46.181513 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:46.206579 1084269 cri.go:89] found id: ""
	I1213 11:27:46.206656 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.206680 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:46.206700 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:46.206796 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:46.242675 1084269 cri.go:89] found id: ""
	I1213 11:27:46.242702 1084269 logs.go:282] 0 containers: []
	W1213 11:27:46.242711 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:46.242720 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:46.242731 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:46.259335 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:46.259365 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:46.336865 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:46.336927 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:46.336958 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:46.368780 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:46.368821 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:46.398163 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:46.398192 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:48.965181 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:48.975408 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:48.975474 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:49.001111 1084269 cri.go:89] found id: ""
	I1213 11:27:49.001137 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.001146 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:49.001153 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:49.001224 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:49.026720 1084269 cri.go:89] found id: ""
	I1213 11:27:49.026746 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.026755 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:49.026761 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:49.026819 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:49.051325 1084269 cri.go:89] found id: ""
	I1213 11:27:49.051352 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.051362 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:49.051369 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:49.051428 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:49.076797 1084269 cri.go:89] found id: ""
	I1213 11:27:49.076820 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.076830 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:49.076836 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:49.076895 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:49.102072 1084269 cri.go:89] found id: ""
	I1213 11:27:49.102095 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.102103 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:49.102109 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:49.102165 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:49.131592 1084269 cri.go:89] found id: ""
	I1213 11:27:49.131615 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.131625 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:49.131632 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:49.131690 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:49.158105 1084269 cri.go:89] found id: ""
	I1213 11:27:49.158133 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.158142 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:49.158148 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:49.158219 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:49.183302 1084269 cri.go:89] found id: ""
	I1213 11:27:49.183326 1084269 logs.go:282] 0 containers: []
	W1213 11:27:49.183335 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:49.183344 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:49.183355 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:49.251855 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:49.251937 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:49.269441 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:49.269521 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:49.334925 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:49.334945 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:49.334959 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:49.365674 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:49.365709 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:51.898747 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:51.908790 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:27:51.908863 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:27:51.936230 1084269 cri.go:89] found id: ""
	I1213 11:27:51.936253 1084269 logs.go:282] 0 containers: []
	W1213 11:27:51.936262 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:27:51.936268 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:27:51.936367 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:27:51.961721 1084269 cri.go:89] found id: ""
	I1213 11:27:51.961752 1084269 logs.go:282] 0 containers: []
	W1213 11:27:51.961761 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:27:51.961766 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:27:51.961825 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:27:51.987003 1084269 cri.go:89] found id: ""
	I1213 11:27:51.987082 1084269 logs.go:282] 0 containers: []
	W1213 11:27:51.987104 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:27:51.987124 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:27:51.987215 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:27:52.016887 1084269 cri.go:89] found id: ""
	I1213 11:27:52.016914 1084269 logs.go:282] 0 containers: []
	W1213 11:27:52.016923 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:27:52.016929 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:27:52.016995 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:27:52.042978 1084269 cri.go:89] found id: ""
	I1213 11:27:52.043060 1084269 logs.go:282] 0 containers: []
	W1213 11:27:52.043077 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:27:52.043084 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:27:52.043155 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:27:52.069984 1084269 cri.go:89] found id: ""
	I1213 11:27:52.070055 1084269 logs.go:282] 0 containers: []
	W1213 11:27:52.070078 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:27:52.070090 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:27:52.070165 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:27:52.096159 1084269 cri.go:89] found id: ""
	I1213 11:27:52.096189 1084269 logs.go:282] 0 containers: []
	W1213 11:27:52.096198 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:27:52.096204 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:27:52.096267 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:27:52.122335 1084269 cri.go:89] found id: ""
	I1213 11:27:52.122360 1084269 logs.go:282] 0 containers: []
	W1213 11:27:52.122369 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:27:52.122406 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:27:52.122426 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:27:52.190185 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:27:52.190221 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:27:52.206574 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:27:52.206605 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:27:52.305974 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:27:52.306050 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:27:52.306069 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:27:52.336989 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:27:52.337024 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:27:54.866248 1084269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:27:54.879318 1084269 kubeadm.go:602] duration metric: took 4m4.068535582s to restartPrimaryControlPlane
	W1213 11:27:54.879388 1084269 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 11:27:54.879458 1084269 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:27:55.360218 1084269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:27:55.373358 1084269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:27:55.381866 1084269 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:27:55.381933 1084269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:27:55.390353 1084269 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:27:55.390376 1084269 kubeadm.go:158] found existing configuration files:
	
	I1213 11:27:55.390428 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:27:55.398869 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:27:55.398935 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:27:55.406139 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:27:55.413604 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:27:55.413696 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:27:55.421100 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:27:55.432239 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:27:55.432310 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:27:55.439391 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:27:55.446895 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:27:55.446992 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
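The grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is treated as stale and deleted before kubeadm init runs. A minimal Go sketch of that check, with illustrative names rather than minikube's internals (in the real run the files are root-owned, hence the sudo in the logged commands):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// The endpoint minikube greps for in each kubeconfig.
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or stale: remove it, as "sudo rm -f <conf>" does above.
			fmt.Printf("removing %s\n", path)
			os.Remove(path)
		}
	}
}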
	I1213 11:27:55.454406 1084269 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:27:55.494512 1084269 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:27:55.494644 1084269 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:27:55.574731 1084269 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:27:55.574808 1084269 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:27:55.574879 1084269 kubeadm.go:319] OS: Linux
	I1213 11:27:55.574940 1084269 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:27:55.575002 1084269 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:27:55.575057 1084269 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:27:55.575108 1084269 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:27:55.575161 1084269 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:27:55.575213 1084269 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:27:55.575262 1084269 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:27:55.575313 1084269 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:27:55.575363 1084269 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:27:55.644997 1084269 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:27:55.645126 1084269 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:27:55.645224 1084269 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:27:55.660993 1084269 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:27:55.665482 1084269 out.go:252]   - Generating certificates and keys ...
	I1213 11:27:55.665613 1084269 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:27:55.665682 1084269 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:27:55.665759 1084269 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:27:55.665828 1084269 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:27:55.665906 1084269 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:27:55.666314 1084269 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:27:55.667279 1084269 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:27:55.668115 1084269 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:27:55.668587 1084269 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:27:55.669432 1084269 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:27:55.670911 1084269 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:27:55.670980 1084269 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:27:55.982086 1084269 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:27:56.229817 1084269 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:27:56.596641 1084269 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:27:56.776762 1084269 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:27:57.047996 1084269 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:27:57.050399 1084269 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:27:57.052884 1084269 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:27:57.056509 1084269 out.go:252]   - Booting up control plane ...
	I1213 11:27:57.056614 1084269 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:27:57.056693 1084269 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:27:57.056761 1084269 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:27:57.071949 1084269 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:27:57.072072 1084269 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:27:57.081727 1084269 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:27:57.082367 1084269 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:27:57.082511 1084269 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:27:57.216804 1084269 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:27:57.216926 1084269 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:31:57.217931 1084269 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001254007s
	I1213 11:31:57.217972 1084269 kubeadm.go:319] 
	I1213 11:31:57.218031 1084269 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:31:57.218071 1084269 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:31:57.218181 1084269 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:31:57.218191 1084269 kubeadm.go:319] 
	I1213 11:31:57.218296 1084269 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:31:57.218331 1084269 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:31:57.218369 1084269 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:31:57.218377 1084269 kubeadm.go:319] 
	I1213 11:31:57.221642 1084269 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:31:57.222126 1084269 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:31:57.222260 1084269 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:31:57.222504 1084269 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:31:57.222514 1084269 kubeadm.go:319] 
	I1213 11:31:57.222584 1084269 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:31:57.222731 1084269 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001254007s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
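The failure itself is the kubelet health probe timing out: kubeadm polls http://127.0.0.1:10248/healthz (the log quotes the equivalent curl -sSL call) for up to 4m0s before giving up. A standalone Go sketch of that wait loop, assuming a plain HTTP 200 "ok" response from a healthy kubelet; this approximates kubeadm's kubelet-check phase rather than reproducing its exact code:

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it answers 200 "ok"
// or the context deadline expires, roughly what kubeadm's kubelet-check does.
func waitForKubelet(ctx context.Context, url string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				continue // connection refused while the kubelet is still down
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForKubelet(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err) // matches the 4m0s timeout reported above
	}
}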
	
	I1213 11:31:57.222815 1084269 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:31:57.655072 1084269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:31:57.668495 1084269 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:31:57.668567 1084269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:31:57.676718 1084269 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:31:57.676739 1084269 kubeadm.go:158] found existing configuration files:
	
	I1213 11:31:57.676793 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:31:57.684515 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:31:57.684582 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:31:57.692419 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:31:57.699939 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:31:57.700008 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:31:57.707300 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:31:57.715160 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:31:57.715250 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:31:57.722580 1084269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:31:57.730355 1084269 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:31:57.730449 1084269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:31:57.737866 1084269 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:31:57.862725 1084269 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:31:57.863162 1084269 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:31:57.933426 1084269 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:35:59.352457 1084269 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 11:35:59.352498 1084269 kubeadm.go:319] 
	I1213 11:35:59.352569 1084269 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:35:59.352726 1084269 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:35:59.352790 1084269 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:35:59.352887 1084269 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:35:59.352947 1084269 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:35:59.352986 1084269 kubeadm.go:319] OS: Linux
	I1213 11:35:59.353035 1084269 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:35:59.353087 1084269 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:35:59.353137 1084269 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:35:59.353188 1084269 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:35:59.353240 1084269 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:35:59.353292 1084269 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:35:59.353341 1084269 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:35:59.353393 1084269 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:35:59.353442 1084269 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:35:59.353517 1084269 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:35:59.353640 1084269 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:35:59.353736 1084269 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:35:59.353802 1084269 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:35:59.358221 1084269 out.go:252]   - Generating certificates and keys ...
	I1213 11:35:59.358325 1084269 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:35:59.358397 1084269 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:35:59.358473 1084269 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:35:59.358607 1084269 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:35:59.358726 1084269 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:35:59.358810 1084269 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:35:59.358911 1084269 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:35:59.359006 1084269 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:35:59.359340 1084269 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:35:59.359467 1084269 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:35:59.359560 1084269 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:35:59.359624 1084269 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:35:59.359680 1084269 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:35:59.359743 1084269 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:35:59.359803 1084269 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:35:59.359871 1084269 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:35:59.359930 1084269 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:35:59.360020 1084269 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:35:59.360091 1084269 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:35:59.363776 1084269 out.go:252]   - Booting up control plane ...
	I1213 11:35:59.363883 1084269 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:35:59.363959 1084269 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:35:59.364021 1084269 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:35:59.364119 1084269 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:35:59.364209 1084269 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:35:59.364307 1084269 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:35:59.364386 1084269 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:35:59.364423 1084269 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:35:59.364545 1084269 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:35:59.364646 1084269 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:35:59.364707 1084269 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000648268s
	I1213 11:35:59.364711 1084269 kubeadm.go:319] 
	I1213 11:35:59.364775 1084269 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:35:59.364807 1084269 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:35:59.364910 1084269 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:35:59.364914 1084269 kubeadm.go:319] 
	I1213 11:35:59.365012 1084269 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:35:59.365042 1084269 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:35:59.365072 1084269 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:35:59.365135 1084269 kubeadm.go:403] duration metric: took 12m8.591698295s to StartCluster
	I1213 11:35:59.365179 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:59.365236 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:59.365318 1084269 kubeadm.go:319] 
	I1213 11:35:59.409502 1084269 cri.go:89] found id: ""
	I1213 11:35:59.409524 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.409600 1084269 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:59.409608 1084269 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:59.409674 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:59.456241 1084269 cri.go:89] found id: ""
	I1213 11:35:59.456263 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.456271 1084269 logs.go:284] No container was found matching "etcd"
	I1213 11:35:59.456278 1084269 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:59.456334 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:59.486055 1084269 cri.go:89] found id: ""
	I1213 11:35:59.486078 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.486087 1084269 logs.go:284] No container was found matching "coredns"
	I1213 11:35:59.486094 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:59.486152 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:59.516292 1084269 cri.go:89] found id: ""
	I1213 11:35:59.516314 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.516322 1084269 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:59.516328 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:59.516387 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:59.551929 1084269 cri.go:89] found id: ""
	I1213 11:35:59.551952 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.551961 1084269 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:59.551967 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:59.552029 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:59.588962 1084269 cri.go:89] found id: ""
	I1213 11:35:59.589034 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.589058 1084269 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:59.589078 1084269 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:59.589172 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:59.630960 1084269 cri.go:89] found id: ""
	I1213 11:35:59.630982 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.630990 1084269 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:59.630996 1084269 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:59.631055 1084269 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:59.671917 1084269 cri.go:89] found id: ""
	I1213 11:35:59.671943 1084269 logs.go:282] 0 containers: []
	W1213 11:35:59.671952 1084269 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:59.671962 1084269 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:59.671975 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:59.696709 1084269 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:59.696805 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:59.847202 1084269 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:59.847220 1084269 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:59.847232 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:59.891722 1084269 logs.go:123] Gathering logs for container status ...
	I1213 11:35:59.891796 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:59.943212 1084269 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:59.943237 1084269 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 11:36:00.059594 1084269 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000648268s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:36:00.059730 1084269 out.go:285] * 
	W1213 11:36:00.059832 1084269 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000648268s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:36:00.059883 1084269 out.go:285] * 
	W1213 11:36:00.062153 1084269 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:36:00.071302 1084269 out.go:203] 
	W1213 11:36:00.074965 1084269 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000648268s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:36:00.075128 1084269 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:36:00.075185 1084269 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:36:00.093016 1084269 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
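The failure reduces to kubeadm's wait-control-plane phase never getting a healthy answer from the kubelet at http://127.0.0.1:10248/healthz. A minimal way to re-run that probe and gather the evidence the kubeadm output asks for, from the host (profile name taken from the test above; a troubleshooting sketch, not part of the suite):

	# probe the kubelet health endpoint that kubeadm polls
	minikube -p kubernetes-upgrade-060355 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	# kubelet state and journal, as suggested by the kubeadm output
	minikube -p kubernetes-upgrade-060355 ssh -- sudo systemctl status kubelet
	minikube -p kubernetes-upgrade-060355 ssh -- sudo journalctl -xeu kubelet | tail -n 100

If the journal points at a cgroup-driver mismatch, the retry would be the one the log itself suggests: minikube start -p kubernetes-upgrade-060355 --extra-config=kubelet.cgroup-driver=systemd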
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-060355 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-060355 version --output=json: exit status 1 (117.516986ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
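kubectl still printed its clientVersion block; the non-zero exit comes solely from the refused dial to 192.168.76.2:8443. Two standard kubectl invocations separate the client side from the server side (shown as a sketch):

	# client version only: succeeds even with the apiserver down
	kubectl --context kubernetes-upgrade-060355 version --client --output=json
	# server reachability with a short timeout
	kubectl --context kubernetes-upgrade-060355 get --raw=/readyz --request-timeout=5s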
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-13 11:36:01.156157148 +0000 UTC m=+5105.633795020
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-060355
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-060355:

-- stdout --
	[
	    {
	        "Id": "b22e4cc962ec3a002840b59858c6ec5b87779cc78230cc02b52311589b7acb9e",
	        "Created": "2025-12-13T11:23:03.785844619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1084393,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:23:38.127538368Z",
	            "FinishedAt": "2025-12-13T11:23:36.911498573Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/b22e4cc962ec3a002840b59858c6ec5b87779cc78230cc02b52311589b7acb9e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b22e4cc962ec3a002840b59858c6ec5b87779cc78230cc02b52311589b7acb9e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b22e4cc962ec3a002840b59858c6ec5b87779cc78230cc02b52311589b7acb9e/hosts",
	        "LogPath": "/var/lib/docker/containers/b22e4cc962ec3a002840b59858c6ec5b87779cc78230cc02b52311589b7acb9e/b22e4cc962ec3a002840b59858c6ec5b87779cc78230cc02b52311589b7acb9e-json.log",
	        "Name": "/kubernetes-upgrade-060355",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-060355:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-060355",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b22e4cc962ec3a002840b59858c6ec5b87779cc78230cc02b52311589b7acb9e",
	                "LowerDir": "/var/lib/docker/overlay2/81e7a0ae1848fd4b33f1c73e6f45d383981f37818edc0bb10526be5fb5cbc358-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/81e7a0ae1848fd4b33f1c73e6f45d383981f37818edc0bb10526be5fb5cbc358/merged",
	                "UpperDir": "/var/lib/docker/overlay2/81e7a0ae1848fd4b33f1c73e6f45d383981f37818edc0bb10526be5fb5cbc358/diff",
	                "WorkDir": "/var/lib/docker/overlay2/81e7a0ae1848fd4b33f1c73e6f45d383981f37818edc0bb10526be5fb5cbc358/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-060355",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-060355/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-060355",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-060355",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-060355",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18e56259f61358a589daea251c2a27b6ed4e73c0b9173c9fa4a900531ef31646",
	            "SandboxKey": "/var/run/docker/netns/18e56259f613",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33748"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33749"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33752"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33750"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33751"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-060355": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:05:f0:a0:91:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b0c3ed24a8f5ca4673925b720e18bbdc862778df141378e90854ea7bcd74b3d6",
	                    "EndpointID": "cbcc809f783cf9bc2dfe3ed23635cd03f86ec08c48d3df5b6714ceef1975dd9a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-060355",
	                        "b22e4cc962ec"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
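The full docker inspect dump can be narrowed to the fields that matter here with a Go template, in the same spirit as the --format={{.Host}} call below (a sketch; the field paths match the JSON above):

	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-060355
	# from the dump above this prints: running 192.168.76.2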
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-060355 -n kubernetes-upgrade-060355
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-060355 -n kubernetes-upgrade-060355: exit status 2 (448.942204ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
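The --format={{.Host}} argument is a Go template over minikube's status struct, so other components can be queried the same way; a non-zero exit from minikube status signals unhealthy components, hence the "may be ok" note. A sketch (the Kubelet and APIServer field names are assumed from minikube's default status output):

	out/minikube-linux-arm64 status -p kubernetes-upgrade-060355 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# Host is Running while the apiserver refuses connections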
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-060355 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ delete  │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ ssh     │ -p NoKubernetes-885378 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │                     │
	│ stop    │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p missing-upgrade-828630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-828630    │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:23 UTC │
	│ ssh     │ -p NoKubernetes-885378 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │                     │
	│ delete  │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:23 UTC │
	│ stop    │ -p kubernetes-upgrade-060355                                                                                                                    │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:23 UTC │
	│ start   │ -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │                     │
	│ delete  │ -p missing-upgrade-828630                                                                                                                       │ missing-upgrade-828630    │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:23 UTC │
	│ start   │ -p stopped-upgrade-443186 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-443186    │ jenkins │ v1.35.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:24 UTC │
	│ stop    │ stopped-upgrade-443186 stop                                                                                                                     │ stopped-upgrade-443186    │ jenkins │ v1.35.0 │ 13 Dec 25 11:24 UTC │ 13 Dec 25 11:24 UTC │
	│ start   │ -p stopped-upgrade-443186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-443186    │ jenkins │ v1.37.0 │ 13 Dec 25 11:24 UTC │ 13 Dec 25 11:28 UTC │
	│ delete  │ -p stopped-upgrade-443186                                                                                                                       │ stopped-upgrade-443186    │ jenkins │ v1.37.0 │ 13 Dec 25 11:28 UTC │ 13 Dec 25 11:28 UTC │
	│ start   │ -p running-upgrade-161631 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-161631    │ jenkins │ v1.35.0 │ 13 Dec 25 11:28 UTC │ 13 Dec 25 11:29 UTC │
	│ start   │ -p running-upgrade-161631 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-161631    │ jenkins │ v1.37.0 │ 13 Dec 25 11:29 UTC │ 13 Dec 25 11:33 UTC │
	│ delete  │ -p running-upgrade-161631                                                                                                                       │ running-upgrade-161631    │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p pause-318241 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:35 UTC │
	│ start   │ -p pause-318241 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:35 UTC │ 13 Dec 25 11:35 UTC │
	│ pause   │ -p pause-318241 --alsologtostderr -v=5                                                                                                          │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:35 UTC │                     │
	│ delete  │ -p pause-318241                                                                                                                                 │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:35 UTC │ 13 Dec 25 11:35 UTC │
	│ start   │ -p force-systemd-flag-770062 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-770062 │ jenkins │ v1.37.0 │ 13 Dec 25 11:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:36:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:36:00.096371 1121906 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:36:00.096615 1121906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:36:00.096649 1121906 out.go:374] Setting ErrFile to fd 2...
	I1213 11:36:00.096677 1121906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:36:00.097585 1121906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:36:00.098287 1121906 out.go:368] Setting JSON to false
	I1213 11:36:00.099416 1121906 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22709,"bootTime":1765603051,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 11:36:00.099516 1121906 start.go:143] virtualization:  
	I1213 11:36:00.104514 1121906 out.go:179] * [force-systemd-flag-770062] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:36:00.123380 1121906 notify.go:221] Checking for updates...
	I1213 11:36:00.147578 1121906 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 11:36:00.159301 1121906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:36:00.181226 1121906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 11:36:00.184641 1121906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 11:36:00.187868 1121906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:36:00.191037 1121906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:36:00.194723 1121906 config.go:182] Loaded profile config "kubernetes-upgrade-060355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:36:00.194848 1121906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:36:00.306953 1121906 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:36:00.307108 1121906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:36:00.483963 1121906 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:36:00.465799173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:36:00.484085 1121906 docker.go:319] overlay module found
	I1213 11:36:00.487484 1121906 out.go:179] * Using the docker driver based on user configuration
	I1213 11:36:00.490660 1121906 start.go:309] selected driver: docker
	I1213 11:36:00.490693 1121906 start.go:927] validating driver "docker" against <nil>
	I1213 11:36:00.490709 1121906 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:36:00.491565 1121906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:36:00.608617 1121906 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:36:00.598048631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:36:00.608819 1121906 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:36:00.609049 1121906 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 11:36:00.612101 1121906 out.go:179] * Using Docker driver with root privileges
	I1213 11:36:00.615019 1121906 cni.go:84] Creating CNI manager for ""
	I1213 11:36:00.615089 1121906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:36:00.615100 1121906 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:36:00.615179 1121906 start.go:353] cluster config:
	{Name:force-systemd-flag-770062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-770062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:36:00.618267 1121906 out.go:179] * Starting "force-systemd-flag-770062" primary control-plane node in "force-systemd-flag-770062" cluster
	I1213 11:36:00.621208 1121906 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:36:00.624346 1121906 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:36:00.627318 1121906 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:36:00.627372 1121906 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:36:00.627383 1121906 cache.go:65] Caching tarball of preloaded images
	I1213 11:36:00.627383 1121906 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:36:00.627472 1121906 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:36:00.627483 1121906 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:36:00.627591 1121906 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/force-systemd-flag-770062/config.json ...
	I1213 11:36:00.627613 1121906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/force-systemd-flag-770062/config.json: {Name:mk48c1b62315460821f2bf46fef7d4cc15132c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:36:00.654607 1121906 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:36:00.654632 1121906 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:36:00.654649 1121906 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:36:00.654680 1121906 start.go:360] acquireMachinesLock for force-systemd-flag-770062: {Name:mkb4f559973762556ee1ee2a903f9c9a93d48839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:36:00.654785 1121906 start.go:364] duration metric: took 90.077µs to acquireMachinesLock for "force-systemd-flag-770062"
	I1213 11:36:00.654812 1121906 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-770062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-770062 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:36:00.654882 1121906 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.477673924Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.477841073Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.47794674Z" level=info msg="Create NRI interface"
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.478116662Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.478179801Z" level=info msg="runtime interface created"
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.478245418Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.478301468Z" level=info msg="runtime interface starting up..."
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.478362794Z" level=info msg="starting plugins..."
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.478424423Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:23:45 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:23:45.478544359Z" level=info msg="No systemd watchdog enabled"
	Dec 13 11:23:45 kubernetes-upgrade-060355 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 11:27:55 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:27:55.649970361Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f823bf76-4b5c-40ea-a3f2-6a34ffc44c4b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:27:55 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:27:55.657772362Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=8dede553-4298-4f3b-bc77-122e5a993179 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:27:55 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:27:55.658465728Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=94f720bb-25af-4e23-aee8-dc3b79ad0075 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:27:55 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:27:55.658956654Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b7c40bdd-becf-4984-9649-a8af29e18fe3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:27:55 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:27:55.659369664Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=37b8d344-e385-469c-b989-7c2a2ed9a848 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:27:55 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:27:55.65979179Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=3553f469-6484-4376-aa2e-ec5e50aaf8b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:27:55 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:27:55.660195643Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=60cdfee8-1c99-4a28-a063-0678e42062f5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:31:57 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:31:57.936729925Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d09ee7a4-1021-4214-af6b-484e07bf068a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:31:57 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:31:57.937409646Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=6ae933f5-0350-4cdd-8c27-793ceb05cbba name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:31:57 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:31:57.937962546Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5a79c4cb-a7b9-4fa9-a8b9-94f22c9fa914 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:31:57 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:31:57.938487295Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=d7a7a5da-4e6a-4037-b130-b9993796d3c2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:31:57 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:31:57.939082026Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43c6d355-acb6-46f1-ba79-d979ded9d5f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:31:57 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:31:57.939613593Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=dbd4dadf-cdcf-4c5a-9500-fdfd2f7f9d9a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:31:57 kubernetes-upgrade-060355 crio[616]: time="2025-12-13T11:31:57.940135445Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=046c2154-c6a5-41c4-ae89-2b006356b362 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 10:59] overlayfs: idmapped layers are currently not supported
	[Dec13 11:00] overlayfs: idmapped layers are currently not supported
	[Dec13 11:01] overlayfs: idmapped layers are currently not supported
	[  +3.910612] overlayfs: idmapped layers are currently not supported
	[Dec13 11:02] overlayfs: idmapped layers are currently not supported
	[Dec13 11:03] overlayfs: idmapped layers are currently not supported
	[Dec13 11:04] overlayfs: idmapped layers are currently not supported
	[Dec13 11:09] overlayfs: idmapped layers are currently not supported
	[ +31.625971] overlayfs: idmapped layers are currently not supported
	[Dec13 11:10] overlayfs: idmapped layers are currently not supported
	[Dec13 11:12] overlayfs: idmapped layers are currently not supported
	[Dec13 11:13] overlayfs: idmapped layers are currently not supported
	[Dec13 11:14] overlayfs: idmapped layers are currently not supported
	[Dec13 11:15] overlayfs: idmapped layers are currently not supported
	[  +7.705175] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[ +26.259109] overlayfs: idmapped layers are currently not supported
	[Dec13 11:17] overlayfs: idmapped layers are currently not supported
	[ +22.550073] overlayfs: idmapped layers are currently not supported
	[Dec13 11:18] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 11:34] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:36:02 up  6:18,  0 user,  load average: 2.00, 1.43, 1.61
	Linux kubernetes-upgrade-060355 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:36:00 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:36:00 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 11:36:00 kubernetes-upgrade-060355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:36:00 kubernetes-upgrade-060355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:36:00 kubernetes-upgrade-060355 kubelet[12322]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:36:00 kubernetes-upgrade-060355 kubelet[12322]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:36:00 kubernetes-upgrade-060355 kubelet[12322]: E1213 11:36:00.781061   12322 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:36:00 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:36:00 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:36:01 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 11:36:01 kubernetes-upgrade-060355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:36:01 kubernetes-upgrade-060355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:36:01 kubernetes-upgrade-060355 kubelet[12339]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:36:01 kubernetes-upgrade-060355 kubelet[12339]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:36:01 kubernetes-upgrade-060355 kubelet[12339]: E1213 11:36:01.571540   12339 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:36:01 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:36:01 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:36:02 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 11:36:02 kubernetes-upgrade-060355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:36:02 kubernetes-upgrade-060355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:36:02 kubernetes-upgrade-060355 kubelet[12409]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:36:02 kubernetes-upgrade-060355 kubelet[12409]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:36:02 kubernetes-upgrade-060355 kubelet[12409]: E1213 11:36:02.301139   12409 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:36:02 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:36:02 kubernetes-upgrade-060355 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
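The start trace near the top of the dump (preload.go:188-238, lock.go:35) shows the caching pattern minikube follows before provisioning: probe the local cache for the version-specific preload tarball and only download on a miss. A minimal sketch of that existence check, assuming placeholder cacheDir/k8sVersion inputs and reusing the file-name pattern logged above (this is illustrative, not minikube's actual preload code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// hasPreload reports whether the preloaded-images tarball for the given
	// Kubernetes version already sits under the minikube cache directory.
	// cacheDir and k8sVersion are assumed inputs; the name pattern matches
	// the tarball path in the log above.
	func hasPreload(cacheDir, k8sVersion string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion)
		_, err := os.Stat(filepath.Join(cacheDir, "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		if hasPreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.34.2") {
			fmt.Println("found local preload, skipping download")
		}
	}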
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-060355 -n kubernetes-upgrade-060355
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-060355 -n kubernetes-upgrade-060355: exit status 2 (385.348369ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-060355" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-060355" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-060355
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-060355: (3.784001507s)
--- FAIL: TestKubernetesUpgrade (788.86s)
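The kubelet section of the dump explains this failure: across restart counters 321-323 the v1.35.0-beta.0 kubelet exits during config validation with "kubelet is configured to not run on a host using cgroup v1", so the API server on localhost:8443 never comes up and `describe nodes` is refused. The standard probe for a host's cgroup mode is whether the unified hierarchy exposes /sys/fs/cgroup/cgroup.controllers, which only exists under cgroup v2; a minimal Go sketch of that check (illustrative, not minikube code):

	package main

	import (
		"fmt"
		"os"
	)

	// cgroupV2 reports whether the host mounts the unified cgroup v2
	// hierarchy: /sys/fs/cgroup/cgroup.controllers exists only on v2.
	func cgroupV2() bool {
		_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
		return err == nil
	}

	func main() {
		if cgroupV2() {
			fmt.Println("cgroup v2 (unified) - kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1 - kubelet v1.35.0-beta.0 refuses to start")
		}
	}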

TestPause/serial/Pause (6.96s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-318241 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-318241 --alsologtostderr -v=5: exit status 80 (2.41033761s)

-- stdout --
	* Pausing node pause-318241 ... 
	
	

-- /stdout --
** stderr ** 
	I1213 11:35:50.583025 1120369 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:35:50.583830 1120369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:35:50.583845 1120369 out.go:374] Setting ErrFile to fd 2...
	I1213 11:35:50.583889 1120369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:35:50.584155 1120369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:35:50.584436 1120369 out.go:368] Setting JSON to false
	I1213 11:35:50.584461 1120369 mustload.go:66] Loading cluster: pause-318241
	I1213 11:35:50.585077 1120369 config.go:182] Loaded profile config "pause-318241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:35:50.585651 1120369 cli_runner.go:164] Run: docker container inspect pause-318241 --format={{.State.Status}}
	I1213 11:35:50.603740 1120369 host.go:66] Checking if "pause-318241" exists ...
	I1213 11:35:50.604057 1120369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:35:50.659596 1120369 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:35:50.649702851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:35:50.660255 1120369 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-318241 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 11:35:50.665808 1120369 out.go:179] * Pausing node pause-318241 ... 
	I1213 11:35:50.668689 1120369 host.go:66] Checking if "pause-318241" exists ...
	I1213 11:35:50.669068 1120369 ssh_runner.go:195] Run: systemctl --version
	I1213 11:35:50.669123 1120369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:50.686371 1120369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:50.788165 1120369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:35:50.800658 1120369 pause.go:52] kubelet running: true
	I1213 11:35:50.800726 1120369 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:35:51.021028 1120369 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:35:51.021129 1120369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:35:51.104150 1120369 cri.go:89] found id: "100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121"
	I1213 11:35:51.104176 1120369 cri.go:89] found id: "162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1"
	I1213 11:35:51.104183 1120369 cri.go:89] found id: "e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a"
	I1213 11:35:51.104187 1120369 cri.go:89] found id: "092092a5e6fbfc1509140795d2624c91baaa2f8e8aca4835a3c725f7a0a68236"
	I1213 11:35:51.104190 1120369 cri.go:89] found id: "431e449a50a15c97ddcd5984e57f5efe1f17a676daffe8932f907551ab539972"
	I1213 11:35:51.104194 1120369 cri.go:89] found id: "2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c"
	I1213 11:35:51.104198 1120369 cri.go:89] found id: "d09d707c6449d5c8655c76e36cdbd8a6b6047eb6ae5c89fc89a58a57e3ee51fe"
	I1213 11:35:51.104201 1120369 cri.go:89] found id: "c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61"
	I1213 11:35:51.104204 1120369 cri.go:89] found id: "5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	I1213 11:35:51.104210 1120369 cri.go:89] found id: "9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3"
	I1213 11:35:51.104213 1120369 cri.go:89] found id: "eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc"
	I1213 11:35:51.104216 1120369 cri.go:89] found id: "3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1"
	I1213 11:35:51.104219 1120369 cri.go:89] found id: "48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c"
	I1213 11:35:51.104222 1120369 cri.go:89] found id: "9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	I1213 11:35:51.104226 1120369 cri.go:89] found id: ""
	I1213 11:35:51.104279 1120369 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:35:51.116510 1120369 retry.go:31] will retry after 226.158004ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:35:51Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:35:51.342914 1120369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:35:51.357237 1120369 pause.go:52] kubelet running: false
	I1213 11:35:51.357303 1120369 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:35:51.505639 1120369 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:35:51.505764 1120369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:35:51.572113 1120369 cri.go:89] found id: "100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121"
	I1213 11:35:51.572138 1120369 cri.go:89] found id: "162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1"
	I1213 11:35:51.572144 1120369 cri.go:89] found id: "e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a"
	I1213 11:35:51.572148 1120369 cri.go:89] found id: "092092a5e6fbfc1509140795d2624c91baaa2f8e8aca4835a3c725f7a0a68236"
	I1213 11:35:51.572151 1120369 cri.go:89] found id: "431e449a50a15c97ddcd5984e57f5efe1f17a676daffe8932f907551ab539972"
	I1213 11:35:51.572155 1120369 cri.go:89] found id: "2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c"
	I1213 11:35:51.572158 1120369 cri.go:89] found id: "d09d707c6449d5c8655c76e36cdbd8a6b6047eb6ae5c89fc89a58a57e3ee51fe"
	I1213 11:35:51.572161 1120369 cri.go:89] found id: "c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61"
	I1213 11:35:51.572165 1120369 cri.go:89] found id: "5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	I1213 11:35:51.572171 1120369 cri.go:89] found id: "9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3"
	I1213 11:35:51.572174 1120369 cri.go:89] found id: "eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc"
	I1213 11:35:51.572177 1120369 cri.go:89] found id: "3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1"
	I1213 11:35:51.572180 1120369 cri.go:89] found id: "48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c"
	I1213 11:35:51.572183 1120369 cri.go:89] found id: "9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	I1213 11:35:51.572186 1120369 cri.go:89] found id: ""
	I1213 11:35:51.572252 1120369 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:35:51.583352 1120369 retry.go:31] will retry after 248.107656ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:35:51Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:35:51.831728 1120369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:35:51.844960 1120369 pause.go:52] kubelet running: false
	I1213 11:35:51.845048 1120369 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:35:51.990154 1120369 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:35:51.990279 1120369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:35:52.063248 1120369 cri.go:89] found id: "100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121"
	I1213 11:35:52.063280 1120369 cri.go:89] found id: "162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1"
	I1213 11:35:52.063285 1120369 cri.go:89] found id: "e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a"
	I1213 11:35:52.063290 1120369 cri.go:89] found id: "092092a5e6fbfc1509140795d2624c91baaa2f8e8aca4835a3c725f7a0a68236"
	I1213 11:35:52.063293 1120369 cri.go:89] found id: "431e449a50a15c97ddcd5984e57f5efe1f17a676daffe8932f907551ab539972"
	I1213 11:35:52.063314 1120369 cri.go:89] found id: "2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c"
	I1213 11:35:52.063325 1120369 cri.go:89] found id: "d09d707c6449d5c8655c76e36cdbd8a6b6047eb6ae5c89fc89a58a57e3ee51fe"
	I1213 11:35:52.063339 1120369 cri.go:89] found id: "c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61"
	I1213 11:35:52.063361 1120369 cri.go:89] found id: "5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	I1213 11:35:52.063386 1120369 cri.go:89] found id: "9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3"
	I1213 11:35:52.063391 1120369 cri.go:89] found id: "eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc"
	I1213 11:35:52.063397 1120369 cri.go:89] found id: "3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1"
	I1213 11:35:52.063402 1120369 cri.go:89] found id: "48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c"
	I1213 11:35:52.063415 1120369 cri.go:89] found id: "9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	I1213 11:35:52.063419 1120369 cri.go:89] found id: ""
	I1213 11:35:52.063485 1120369 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:35:52.074963 1120369 retry.go:31] will retry after 610.835164ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:35:52Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:35:52.686873 1120369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:35:52.700195 1120369 pause.go:52] kubelet running: false
	I1213 11:35:52.700293 1120369 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:35:52.840006 1120369 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:35:52.840098 1120369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:35:52.905457 1120369 cri.go:89] found id: "100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121"
	I1213 11:35:52.905480 1120369 cri.go:89] found id: "162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1"
	I1213 11:35:52.905485 1120369 cri.go:89] found id: "e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a"
	I1213 11:35:52.905490 1120369 cri.go:89] found id: "092092a5e6fbfc1509140795d2624c91baaa2f8e8aca4835a3c725f7a0a68236"
	I1213 11:35:52.905493 1120369 cri.go:89] found id: "431e449a50a15c97ddcd5984e57f5efe1f17a676daffe8932f907551ab539972"
	I1213 11:35:52.905497 1120369 cri.go:89] found id: "2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c"
	I1213 11:35:52.905500 1120369 cri.go:89] found id: "d09d707c6449d5c8655c76e36cdbd8a6b6047eb6ae5c89fc89a58a57e3ee51fe"
	I1213 11:35:52.905504 1120369 cri.go:89] found id: "c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61"
	I1213 11:35:52.905506 1120369 cri.go:89] found id: "5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	I1213 11:35:52.905516 1120369 cri.go:89] found id: "9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3"
	I1213 11:35:52.905519 1120369 cri.go:89] found id: "eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc"
	I1213 11:35:52.905523 1120369 cri.go:89] found id: "3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1"
	I1213 11:35:52.905553 1120369 cri.go:89] found id: "48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c"
	I1213 11:35:52.905560 1120369 cri.go:89] found id: "9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	I1213 11:35:52.905563 1120369 cri.go:89] found id: ""
	I1213 11:35:52.905624 1120369 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:35:52.920288 1120369 out.go:203] 
	W1213 11:35:52.923401 1120369 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:35:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:35:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 11:35:52.923470 1120369 out.go:285] * 
	* 
	W1213 11:35:52.931790 1120369 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:35:52.934732 1120369 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-318241 --alsologtostderr -v=5" : exit status 80
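The stderr trace shows why: every `sudo runc list -f json` attempt fails with "open /run/runc: no such file or directory", and minikube retries with growing delays (retry.go:31, roughly 226ms, 248ms, then 610ms) before surfacing GUEST_PAUSE. A minimal sketch of that retry-with-backoff shape, assuming a caller-supplied operation and illustrative doubling delays (this is not minikube's actual retry.go, which randomizes its backoff):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// withRetry re-runs op until it succeeds or attempts are exhausted,
	// doubling the delay between tries - the same shape as the
	// "will retry after ..." lines in the trace above.
	func withRetry(attempts int, delay time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		err := withRetry(3, 200*time.Millisecond, func() error {
			return errors.New("list running: runc: exit status 1")
		})
		fmt.Println("giving up:", err)
	}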
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-318241
helpers_test.go:244: (dbg) docker inspect pause-318241:

-- stdout --
	[
	    {
	        "Id": "99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201",
	        "Created": "2025-12-13T11:34:05.040362913Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1116449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:34:05.10899505Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/hostname",
	        "HostsPath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/hosts",
	        "LogPath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201-json.log",
	        "Name": "/pause-318241",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-318241:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-318241",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201",
	                "LowerDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-318241",
	                "Source": "/var/lib/docker/volumes/pause-318241/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-318241",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-318241",
	                "name.minikube.sigs.k8s.io": "pause-318241",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d9431fa8955751714218c6240013195119174924fb95e8531af90275817bcdb",
	            "SandboxKey": "/var/run/docker/netns/1d9431fa8955",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-318241": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:ef:a8:c4:6b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6936a9e18475ad8505cbd530811f4f4957e48bb9c3fdaa7604b330d6ce314f4e",
	                    "EndpointID": "44ec637ac2e8e61997ab22ae33e9a005e9715815ddb7cfc7e813a0fd2b163d12",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-318241",
	                        "99a18544a4b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
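The inspect output above is exactly what the earlier cli_runner step consumes: the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} resolves to the mapped SSH port (33768 here), which sshutil then dials on 127.0.0.1. The same lookup from Go via os/exec, with the container name as the only input (a sketch of the pattern, not minikube's cli_runner itself):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshPort returns the host port Docker mapped to the container's 22/tcp,
	// using the same inspect template as the cli_runner call in the trace.
	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshPort("pause-318241")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh reachable at 127.0.0.1:" + port)
	}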
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-318241 -n pause-318241
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-318241 -n pause-318241: exit status 2 (376.409466ms)

-- stdout --
	Running

                                                
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-318241 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-318241 logs -n 25: (1.345187297s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-885378 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:21 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p missing-upgrade-828630 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-828630    │ jenkins │ v1.35.0 │ 13 Dec 25 11:21 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ delete  │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ ssh     │ -p NoKubernetes-885378 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │                     │
	│ stop    │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p missing-upgrade-828630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-828630    │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:23 UTC │
	│ ssh     │ -p NoKubernetes-885378 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │                     │
	│ delete  │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:23 UTC │
	│ stop    │ -p kubernetes-upgrade-060355                                                                                                                    │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:23 UTC │
	│ start   │ -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │                     │
	│ delete  │ -p missing-upgrade-828630                                                                                                                       │ missing-upgrade-828630    │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:23 UTC │
	│ start   │ -p stopped-upgrade-443186 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-443186    │ jenkins │ v1.35.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:24 UTC │
	│ stop    │ stopped-upgrade-443186 stop                                                                                                                     │ stopped-upgrade-443186    │ jenkins │ v1.35.0 │ 13 Dec 25 11:24 UTC │ 13 Dec 25 11:24 UTC │
	│ start   │ -p stopped-upgrade-443186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-443186    │ jenkins │ v1.37.0 │ 13 Dec 25 11:24 UTC │ 13 Dec 25 11:28 UTC │
	│ delete  │ -p stopped-upgrade-443186                                                                                                                       │ stopped-upgrade-443186    │ jenkins │ v1.37.0 │ 13 Dec 25 11:28 UTC │ 13 Dec 25 11:28 UTC │
	│ start   │ -p running-upgrade-161631 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-161631    │ jenkins │ v1.35.0 │ 13 Dec 25 11:28 UTC │ 13 Dec 25 11:29 UTC │
	│ start   │ -p running-upgrade-161631 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-161631    │ jenkins │ v1.37.0 │ 13 Dec 25 11:29 UTC │ 13 Dec 25 11:33 UTC │
	│ delete  │ -p running-upgrade-161631                                                                                                                       │ running-upgrade-161631    │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p pause-318241 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:35 UTC │
	│ start   │ -p pause-318241 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:35 UTC │ 13 Dec 25 11:35 UTC │
	│ pause   │ -p pause-318241 --alsologtostderr -v=5                                                                                                          │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:35:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:35:22.270391 1119052 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:35:22.278389 1119052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:35:22.278548 1119052 out.go:374] Setting ErrFile to fd 2...
	I1213 11:35:22.278571 1119052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:35:22.278888 1119052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:35:22.279397 1119052 out.go:368] Setting JSON to false
	I1213 11:35:22.280533 1119052 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22672,"bootTime":1765603051,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 11:35:22.281360 1119052 start.go:143] virtualization:  
	I1213 11:35:22.284456 1119052 out.go:179] * [pause-318241] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:35:22.289049 1119052 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 11:35:22.289063 1119052 notify.go:221] Checking for updates...
	I1213 11:35:22.292983 1119052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:35:22.296166 1119052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 11:35:22.299058 1119052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 11:35:22.301948 1119052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:35:22.304894 1119052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:35:22.308301 1119052 config.go:182] Loaded profile config "pause-318241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:35:22.309092 1119052 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:35:22.331532 1119052 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:35:22.331647 1119052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:35:22.398328 1119052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:35:22.388656933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:35:22.398446 1119052 docker.go:319] overlay module found
	I1213 11:35:22.401601 1119052 out.go:179] * Using the docker driver based on existing profile
	I1213 11:35:22.404405 1119052 start.go:309] selected driver: docker
	I1213 11:35:22.404427 1119052 start.go:927] validating driver "docker" against &{Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:35:22.404558 1119052 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:35:22.404674 1119052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:35:22.467543 1119052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:35:22.45765305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:35:22.467997 1119052 cni.go:84] Creating CNI manager for ""
	I1213 11:35:22.468067 1119052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:35:22.468135 1119052 start.go:353] cluster config:
	{Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:35:22.471301 1119052 out.go:179] * Starting "pause-318241" primary control-plane node in "pause-318241" cluster
	I1213 11:35:22.477374 1119052 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:35:22.480357 1119052 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:35:22.483356 1119052 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:35:22.483412 1119052 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:35:22.483423 1119052 cache.go:65] Caching tarball of preloaded images
	I1213 11:35:22.483523 1119052 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:35:22.483547 1119052 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:35:22.483689 1119052 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/config.json ...
	I1213 11:35:22.483942 1119052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:35:22.511850 1119052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:35:22.511870 1119052 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:35:22.511883 1119052 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:35:22.511918 1119052 start.go:360] acquireMachinesLock for pause-318241: {Name:mkbfa445139c4dfc6002d6ff5760c7517527f5e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:35:22.511973 1119052 start.go:364] duration metric: took 37.932µs to acquireMachinesLock for "pause-318241"
	I1213 11:35:22.511993 1119052 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:35:22.511998 1119052 fix.go:54] fixHost starting: 
	I1213 11:35:22.512265 1119052 cli_runner.go:164] Run: docker container inspect pause-318241 --format={{.State.Status}}
	I1213 11:35:22.534695 1119052 fix.go:112] recreateIfNeeded on pause-318241: state=Running err=<nil>
	W1213 11:35:22.534729 1119052 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:35:22.537936 1119052 out.go:252] * Updating the running docker "pause-318241" container ...
	I1213 11:35:22.537971 1119052 machine.go:94] provisionDockerMachine start ...
	I1213 11:35:22.538058 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:22.559551 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:22.559877 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:22.559887 1119052 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:35:22.713106 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-318241
	
	I1213 11:35:22.713137 1119052 ubuntu.go:182] provisioning hostname "pause-318241"
	I1213 11:35:22.713206 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:22.730988 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:22.731322 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:22.731347 1119052 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-318241 && echo "pause-318241" | sudo tee /etc/hostname
	I1213 11:35:22.895627 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-318241
	
	I1213 11:35:22.895734 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:22.918298 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:22.918623 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:22.918646 1119052 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-318241' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-318241/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-318241' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:35:23.073983 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
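
	(The /etc/hosts script above is self-contained; a minimal sketch for replaying it by hand, with only the profile name parameterized and everything else taken from the logged commands:
	  NAME=pause-318241   # hypothetical: substitute any profile name
	  sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	This keeps the hostname resolvable even when no DNS entry exists for the node.)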
	I1213 11:35:23.074073 1119052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 11:35:23.074125 1119052 ubuntu.go:190] setting up certificates
	I1213 11:35:23.074158 1119052 provision.go:84] configureAuth start
	I1213 11:35:23.074255 1119052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-318241
	I1213 11:35:23.092495 1119052 provision.go:143] copyHostCerts
	I1213 11:35:23.092571 1119052 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 11:35:23.092580 1119052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 11:35:23.092653 1119052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 11:35:23.092751 1119052 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 11:35:23.092763 1119052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 11:35:23.092790 1119052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 11:35:23.092877 1119052 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 11:35:23.092883 1119052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 11:35:23.092912 1119052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 11:35:23.092962 1119052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.pause-318241 san=[127.0.0.1 192.168.85.2 localhost minikube pause-318241]
	I1213 11:35:23.229482 1119052 provision.go:177] copyRemoteCerts
	I1213 11:35:23.229608 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:35:23.229668 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:23.251873 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:23.369516 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:35:23.387144 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:35:23.405593 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:35:23.423216 1119052 provision.go:87] duration metric: took 349.014314ms to configureAuth
	I1213 11:35:23.423243 1119052 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:35:23.423470 1119052 config.go:182] Loaded profile config "pause-318241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:35:23.423589 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:23.442305 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:23.442628 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:23.442652 1119052 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:35:28.826935 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:35:28.826958 1119052 machine.go:97] duration metric: took 6.288977988s to provisionDockerMachine
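
	(A quick way to confirm the crio.minikube drop-in written above actually reached the node; a sketch assuming the docker driver, with the container name taken from the log:
	  docker exec pause-318241 cat /etc/sysconfig/crio.minikube
	  docker exec pause-318241 systemctl is-active crio   # expect "active" after the restart
	)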
	I1213 11:35:28.826971 1119052 start.go:293] postStartSetup for "pause-318241" (driver="docker")
	I1213 11:35:28.826982 1119052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:35:28.827048 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:35:28.827109 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:28.846063 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:28.953554 1119052 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:35:28.956915 1119052 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:35:28.956945 1119052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:35:28.956959 1119052 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 11:35:28.957015 1119052 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 11:35:28.957099 1119052 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 11:35:28.957207 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:35:28.964665 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 11:35:28.982982 1119052 start.go:296] duration metric: took 155.994925ms for postStartSetup
	I1213 11:35:28.983065 1119052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:35:28.983128 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:29.001385 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:29.106990 1119052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:35:29.112299 1119052 fix.go:56] duration metric: took 6.600280102s for fixHost
	I1213 11:35:29.112330 1119052 start.go:83] releasing machines lock for "pause-318241", held for 6.600348419s
	I1213 11:35:29.112426 1119052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-318241
	I1213 11:35:29.140287 1119052 ssh_runner.go:195] Run: cat /version.json
	I1213 11:35:29.140332 1119052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:35:29.140345 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:29.140399 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:29.159219 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:29.161253 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:29.265631 1119052 ssh_runner.go:195] Run: systemctl --version
	I1213 11:35:29.363868 1119052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:35:29.409048 1119052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:35:29.414699 1119052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:35:29.414827 1119052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:35:29.422782 1119052 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:35:29.422808 1119052 start.go:496] detecting cgroup driver to use...
	I1213 11:35:29.422860 1119052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:35:29.422937 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:35:29.439399 1119052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:29.452937 1119052 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:35:29.453001 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:35:29.470853 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:35:29.484206 1119052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:35:29.629762 1119052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:35:29.771591 1119052 docker.go:234] disabling docker service ...
	I1213 11:35:29.771726 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:35:29.786515 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:35:29.799962 1119052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:35:29.941053 1119052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:35:30.120096 1119052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:35:30.136397 1119052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:30.154347 1119052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:35:30.154479 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.164961 1119052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:35:30.165093 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.175512 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.185082 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.194864 1119052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:35:30.203560 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.213309 1119052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.222486 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.231605 1119052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:35:30.239491 1119052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:35:30.247402 1119052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:30.385703 1119052 ssh_runner.go:195] Run: sudo systemctl restart crio
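
	(The sed commands above are idempotent line rewrites of /etc/crio/crio.conf.d/02-crio.conf followed by a restart; a sketch that applies and checks the two key settings on a scratch copy, with commands and values taken from the log:
	  CONF=/tmp/02-crio.conf          # assumed: scratch copy of the logged path
	  sudo cp /etc/crio/crio.conf.d/02-crio.conf "$CONF"
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	  grep -E 'pause_image|cgroup_manager' "$CONF"
	)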
	I1213 11:35:30.596707 1119052 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:35:30.596782 1119052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:35:30.600592 1119052 start.go:564] Will wait 60s for crictl version
	I1213 11:35:30.600756 1119052 ssh_runner.go:195] Run: which crictl
	I1213 11:35:30.604239 1119052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:35:30.628032 1119052 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:35:30.628161 1119052 ssh_runner.go:195] Run: crio --version
	I1213 11:35:30.657427 1119052 ssh_runner.go:195] Run: crio --version
	I1213 11:35:30.694200 1119052 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:35:30.697365 1119052 cli_runner.go:164] Run: docker network inspect pause-318241 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
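
	(The Go template in the network inspect above packs everything into one JSON blob; the same subnet/gateway lookup can be done with a shorter format string, a sketch using the network name from the log:
	  docker network inspect pause-318241 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	)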
	I1213 11:35:30.713360 1119052 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:35:30.717316 1119052 kubeadm.go:884] updating cluster {Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:35:30.717458 1119052 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:35:30.717511 1119052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:35:30.760464 1119052 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:35:30.760487 1119052 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:35:30.760553 1119052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:35:30.791999 1119052 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:35:30.792026 1119052 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:35:30.792034 1119052 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 11:35:30.792142 1119052 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-318241 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
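
	(The kubelet unit rendered above is written to disk a few lines further down; a sketch for inspecting what systemd actually merged on the node, assuming the docker driver, with the container name and drop-in path taken from the log:
	  docker exec pause-318241 systemctl cat kubelet
	  docker exec pause-318241 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	)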
	I1213 11:35:30.792235 1119052 ssh_runner.go:195] Run: crio config
	I1213 11:35:30.871353 1119052 cni.go:84] Creating CNI manager for ""
	I1213 11:35:30.871434 1119052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:35:30.871472 1119052 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:35:30.871510 1119052 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-318241 NodeName:pause-318241 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:35:30.871666 1119052 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-318241"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
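	(The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new below; recent kubeadm releases can sanity-check such a file before it is applied. A sketch, assuming the binary path from the log and that the `kubeadm config validate` subcommand is available in v1.34:
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
	)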
	I1213 11:35:30.871778 1119052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:35:30.879877 1119052 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:35:30.879954 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:35:30.887862 1119052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1213 11:35:30.901565 1119052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:35:30.915547 1119052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1213 11:35:30.928617 1119052 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:35:30.932501 1119052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.061531 1119052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:31.076242 1119052 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241 for IP: 192.168.85.2
	I1213 11:35:31.076264 1119052 certs.go:195] generating shared ca certs ...
	I1213 11:35:31.076282 1119052 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:31.076431 1119052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 11:35:31.076484 1119052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 11:35:31.076497 1119052 certs.go:257] generating profile certs ...
	I1213 11:35:31.076593 1119052 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.key
	I1213 11:35:31.076674 1119052 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/apiserver.key.45c3d61f
	I1213 11:35:31.076759 1119052 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/proxy-client.key
	I1213 11:35:31.076898 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 11:35:31.076945 1119052 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 11:35:31.076959 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:35:31.076989 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:35:31.077018 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:35:31.077048 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 11:35:31.077099 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 11:35:31.077796 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:35:31.097522 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:35:31.117118 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:35:31.136729 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:35:31.155950 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 11:35:31.174504 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:35:31.192150 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:35:31.209765 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:35:31.227756 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 11:35:31.245040 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 11:35:31.263804 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:35:31.281616 1119052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:35:31.293769 1119052 ssh_runner.go:195] Run: openssl version
	I1213 11:35:31.300088 1119052 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.307331 1119052 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 11:35:31.314654 1119052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.318358 1119052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.318430 1119052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.359104 1119052 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:35:31.366426 1119052 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.373617 1119052 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 11:35:31.380828 1119052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.384602 1119052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.384670 1119052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.425866 1119052 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:35:31.433263 1119052 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.440734 1119052 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:35:31.448145 1119052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.451720 1119052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.451787 1119052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.493196 1119052 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
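
	(The three openssl/ln pairs above follow one pattern: OpenSSL looks certificates up via subject-hash symlinks, which is where names like 51391683.0 and b5213941.0 come from. A sketch of the derivation for the minikubeCA case, paths taken from the log:
	  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"
	  test -L "/etc/ssl/certs/$H.0" && echo "linked as $H.0"   # b5213941.0 in this run
	)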
	I1213 11:35:31.500961 1119052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:35:31.505419 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:35:31.547311 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:35:31.589448 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:35:31.630220 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:35:31.670943 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:35:31.711458 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
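
	(Each -checkend 86400 run above exits non-zero if the certificate expires within 24 hours; a compact sketch of the same sweep, with the certificate list taken from the log:
	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	    sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	      || echo "$c expires within 24h"
	  done
	)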
	I1213 11:35:31.751747 1119052 kubeadm.go:401] StartCluster: {Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:35:31.751876 1119052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:35:31.751951 1119052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:35:31.781827 1119052 cri.go:89] found id: "c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61"
	I1213 11:35:31.781848 1119052 cri.go:89] found id: "5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	I1213 11:35:31.781854 1119052 cri.go:89] found id: "9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3"
	I1213 11:35:31.781858 1119052 cri.go:89] found id: "eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc"
	I1213 11:35:31.781862 1119052 cri.go:89] found id: "3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1"
	I1213 11:35:31.781866 1119052 cri.go:89] found id: "48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c"
	I1213 11:35:31.781869 1119052 cri.go:89] found id: "9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	I1213 11:35:31.781873 1119052 cri.go:89] found id: ""
	I1213 11:35:31.781925 1119052 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:35:31.795227 1119052 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:35:31Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:35:31.795299 1119052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:35:31.802945 1119052 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:35:31.803018 1119052 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:35:31.803076 1119052 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:35:31.810586 1119052 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:35:31.811285 1119052 kubeconfig.go:125] found "pause-318241" server: "https://192.168.85.2:8443"
	I1213 11:35:31.812051 1119052 kapi.go:59] client config for pause-318241: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:35:31.812558 1119052 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:35:31.812582 1119052 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 11:35:31.812587 1119052 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:35:31.812592 1119052 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 11:35:31.812600 1119052 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 11:35:31.812851 1119052 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:35:31.820421 1119052 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:35:31.820452 1119052 kubeadm.go:602] duration metric: took 17.420235ms to restartPrimaryControlPlane
	I1213 11:35:31.820463 1119052 kubeadm.go:403] duration metric: took 68.72565ms to StartCluster
	I1213 11:35:31.820478 1119052 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:31.820537 1119052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 11:35:31.821421 1119052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:31.821669 1119052 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:35:31.822014 1119052 config.go:182] Loaded profile config "pause-318241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:35:31.822064 1119052 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:35:31.827356 1119052 out.go:179] * Enabled addons: 
	I1213 11:35:31.827364 1119052 out.go:179] * Verifying Kubernetes components...
	I1213 11:35:31.830027 1119052 addons.go:530] duration metric: took 7.960063ms for enable addons: enabled=[]
	I1213 11:35:31.830114 1119052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.956307 1119052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:31.969701 1119052 node_ready.go:35] waiting up to 6m0s for node "pause-318241" to be "Ready" ...
	I1213 11:35:36.548509 1119052 node_ready.go:49] node "pause-318241" is "Ready"
	I1213 11:35:36.548542 1119052 node_ready.go:38] duration metric: took 4.578805955s for node "pause-318241" to be "Ready" ...
	I1213 11:35:36.548556 1119052 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:35:36.548616 1119052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:36.568079 1119052 api_server.go:72] duration metric: took 4.746371943s to wait for apiserver process to appear ...
	I1213 11:35:36.568115 1119052 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:35:36.568135 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:36.598687 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:35:36.598720 1119052 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:35:37.068245 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:37.076783 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:35:37.076830 1119052 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:35:37.568271 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:37.579707 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:35:37.579747 1119052 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:35:38.068252 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:38.079823 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 11:35:38.080933 1119052 api_server.go:141] control plane version: v1.34.2
	I1213 11:35:38.080980 1119052 api_server.go:131] duration metric: took 1.5128523s to wait for apiserver health ...
	I1213 11:35:38.080990 1119052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:35:38.085191 1119052 system_pods.go:59] 7 kube-system pods found
	I1213 11:35:38.085235 1119052 system_pods.go:61] "coredns-66bc5c9577-zg2b2" [10caf02b-e875-43a4-889f-bf6c434d73dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:35:38.085247 1119052 system_pods.go:61] "etcd-pause-318241" [c1406768-08f7-46bf-a952-9df2246da639] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:35:38.085253 1119052 system_pods.go:61] "kindnet-cn6qx" [4ffe31da-1d55-434e-9821-30f3967fa9b5] Running
	I1213 11:35:38.085262 1119052 system_pods.go:61] "kube-apiserver-pause-318241" [fc56ce75-2daa-48d0-8ef6-2c389e85e550] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:35:38.085272 1119052 system_pods.go:61] "kube-controller-manager-pause-318241" [dedf20d0-fc79-4e2f-b316-fb14f00600dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:35:38.085288 1119052 system_pods.go:61] "kube-proxy-89wjk" [6feb766a-9bbb-4051-9713-7b7104e86f7b] Running
	I1213 11:35:38.085299 1119052 system_pods.go:61] "kube-scheduler-pause-318241" [96c9bf0f-5d19-4b7b-9623-fb92d3d572fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:35:38.085306 1119052 system_pods.go:74] duration metric: took 4.308851ms to wait for pod list to return data ...
	I1213 11:35:38.085319 1119052 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:35:38.088386 1119052 default_sa.go:45] found service account: "default"
	I1213 11:35:38.088409 1119052 default_sa.go:55] duration metric: took 3.084823ms for default service account to be created ...
	I1213 11:35:38.088417 1119052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:35:38.091474 1119052 system_pods.go:86] 7 kube-system pods found
	I1213 11:35:38.091515 1119052 system_pods.go:89] "coredns-66bc5c9577-zg2b2" [10caf02b-e875-43a4-889f-bf6c434d73dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:35:38.091524 1119052 system_pods.go:89] "etcd-pause-318241" [c1406768-08f7-46bf-a952-9df2246da639] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:35:38.091530 1119052 system_pods.go:89] "kindnet-cn6qx" [4ffe31da-1d55-434e-9821-30f3967fa9b5] Running
	I1213 11:35:38.091536 1119052 system_pods.go:89] "kube-apiserver-pause-318241" [fc56ce75-2daa-48d0-8ef6-2c389e85e550] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:35:38.091543 1119052 system_pods.go:89] "kube-controller-manager-pause-318241" [dedf20d0-fc79-4e2f-b316-fb14f00600dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:35:38.091547 1119052 system_pods.go:89] "kube-proxy-89wjk" [6feb766a-9bbb-4051-9713-7b7104e86f7b] Running
	I1213 11:35:38.091554 1119052 system_pods.go:89] "kube-scheduler-pause-318241" [96c9bf0f-5d19-4b7b-9623-fb92d3d572fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:35:38.091571 1119052 system_pods.go:126] duration metric: took 3.141717ms to wait for k8s-apps to be running ...
	I1213 11:35:38.091592 1119052 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:35:38.091664 1119052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:35:38.108005 1119052 system_svc.go:56] duration metric: took 16.404414ms WaitForService to wait for kubelet
	I1213 11:35:38.108046 1119052 kubeadm.go:587] duration metric: took 6.286341749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:35:38.108065 1119052 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:35:38.113657 1119052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:35:38.113692 1119052 node_conditions.go:123] node cpu capacity is 2
	I1213 11:35:38.113706 1119052 node_conditions.go:105] duration metric: took 5.636446ms to run NodePressure ...
	I1213 11:35:38.113718 1119052 start.go:242] waiting for startup goroutines ...
	I1213 11:35:38.113732 1119052 start.go:247] waiting for cluster config update ...
	I1213 11:35:38.113748 1119052 start.go:256] writing updated cluster config ...
	I1213 11:35:38.114093 1119052 ssh_runner.go:195] Run: rm -f paused
	I1213 11:35:38.118390 1119052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:35:38.119080 1119052 kapi.go:59] client config for pause-318241: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:35:38.126994 1119052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zg2b2" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:35:40.132373 1119052 pod_ready.go:104] pod "coredns-66bc5c9577-zg2b2" is not "Ready", error: <nil>
	W1213 11:35:42.134196 1119052 pod_ready.go:104] pod "coredns-66bc5c9577-zg2b2" is not "Ready", error: <nil>
	I1213 11:35:44.633056 1119052 pod_ready.go:94] pod "coredns-66bc5c9577-zg2b2" is "Ready"
	I1213 11:35:44.633086 1119052 pod_ready.go:86] duration metric: took 6.506052781s for pod "coredns-66bc5c9577-zg2b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:44.635755 1119052 pod_ready.go:83] waiting for pod "etcd-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:44.640548 1119052 pod_ready.go:94] pod "etcd-pause-318241" is "Ready"
	I1213 11:35:44.640577 1119052 pod_ready.go:86] duration metric: took 4.788102ms for pod "etcd-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:44.643123 1119052 pod_ready.go:83] waiting for pod "kube-apiserver-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:35:46.648036 1119052 pod_ready.go:104] pod "kube-apiserver-pause-318241" is not "Ready", error: <nil>
	W1213 11:35:48.649188 1119052 pod_ready.go:104] pod "kube-apiserver-pause-318241" is not "Ready", error: <nil>
	I1213 11:35:50.148694 1119052 pod_ready.go:94] pod "kube-apiserver-pause-318241" is "Ready"
	I1213 11:35:50.148720 1119052 pod_ready.go:86] duration metric: took 5.505569752s for pod "kube-apiserver-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.151033 1119052 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.155855 1119052 pod_ready.go:94] pod "kube-controller-manager-pause-318241" is "Ready"
	I1213 11:35:50.155887 1119052 pod_ready.go:86] duration metric: took 4.825148ms for pod "kube-controller-manager-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.158500 1119052 pod_ready.go:83] waiting for pod "kube-proxy-89wjk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.163340 1119052 pod_ready.go:94] pod "kube-proxy-89wjk" is "Ready"
	I1213 11:35:50.163365 1119052 pod_ready.go:86] duration metric: took 4.793935ms for pod "kube-proxy-89wjk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.166019 1119052 pod_ready.go:83] waiting for pod "kube-scheduler-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.430692 1119052 pod_ready.go:94] pod "kube-scheduler-pause-318241" is "Ready"
	I1213 11:35:50.430723 1119052 pod_ready.go:86] duration metric: took 264.681075ms for pod "kube-scheduler-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.430737 1119052 pod_ready.go:40] duration metric: took 12.312305208s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:35:50.484118 1119052 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 11:35:50.487315 1119052 out.go:179] * Done! kubectl is now configured to use "pause-318241" cluster and "default" namespace by default
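
For context on the healthz probing logged above: minikube (api_server.go) polls the apiserver's /healthz endpoint roughly every 500ms until it returns 200 "ok", printing the per-poststarthook breakdown on each 500. Below is a minimal Go sketch of such a loop; it is an illustration, not minikube's code, and it skips certificate verification for brevity where the real client is configured with the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, mirroring the
// check/500/retry pattern in the log above (hypothetical sketch).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// InsecureSkipVerify is an illustration-only shortcut; the real
		// client trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200 "ok"
			}
			// On 500 the body lists each poststarthook, as in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above, the loop saw three 500 responses (bootstrap-controller, then the scheduling and RBAC poststarthooks finishing in turn) before the 200 at 11:35:38.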
	
	
	==> CRI-O <==
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.449157079Z" level=info msg="Created container e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a: kube-system/etcd-pause-318241/etcd" id=4fe7b47d-bdd7-4903-8577-dea117296086 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.450310166Z" level=info msg="Starting container: e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a" id=3418ffe3-1fce-49fc-b990-07f56852843a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.454433374Z" level=info msg="Started container" PID=2390 containerID=2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c description=kube-system/coredns-66bc5c9577-zg2b2/coredns id=a9ff23b3-3cc0-48f8-ae6c-dc1a5655f98c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebcf9d1d4a0be828b273e5a4033222bd2207adad074789a4869b1562f722cac4
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.463933292Z" level=info msg="Started container" PID=2400 containerID=e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a description=kube-system/etcd-pause-318241/etcd id=3418ffe3-1fce-49fc-b990-07f56852843a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b801179981ad68000ddc3231258864349d7e708b2a31a2909728e2e23dddb2a0
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.471556244Z" level=info msg="Created container 162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1: kube-system/kube-controller-manager-pause-318241/kube-controller-manager" id=5216e7f1-605b-48be-a9aa-2e398250688e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.472144246Z" level=info msg="Starting container: 162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1" id=00225c47-380b-40a3-be47-c809d1e1d69e name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.485089188Z" level=info msg="Started container" PID=2406 containerID=162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1 description=kube-system/kube-controller-manager-pause-318241/kube-controller-manager id=00225c47-380b-40a3-be47-c809d1e1d69e name=/runtime.v1.RuntimeService/StartContainer sandboxID=d36a48f2bb5b58c6b3940026dc2a98d13fe977627e5fb318f1aec1a4326b18a9
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.610077142Z" level=info msg="Created container 100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121: kube-system/kube-proxy-89wjk/kube-proxy" id=a21872a9-76f1-4e2e-b666-8b25653e8ada name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.610709403Z" level=info msg="Starting container: 100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121" id=9be1798e-891e-4fc8-9755-ef635483a7ca name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.613129357Z" level=info msg="Started container" PID=2436 containerID=100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121 description=kube-system/kube-proxy-89wjk/kube-proxy id=9be1798e-891e-4fc8-9755-ef635483a7ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bc3bf70ee1abd2d653880402b8164fb27551a22eb66ed7301bb00e5495c8da7
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.727090469Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.73152308Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.731558223Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.731580516Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.734840824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.734888143Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.73491111Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.73844008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.738476224Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.738499215Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.741973457Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.742014475Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.742039862Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.745404983Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.745447035Z" level=info msg="Updated default CNI network name to kindnet"
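
The "CNI monitoring event" lines above are CRI-O watching /etc/cni/net.d: kindnet writes 10-kindnet.conflist.temp and then renames it into place, which produces the CREATE/WRITE/RENAME/CREATE sequence. A minimal fsnotify sketch (an illustration, not CRI-O's actual watcher) that observes the same events:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	// Watch the CNI config directory, as CRI-O does.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		// kindnet writes 10-kindnet.conflist.temp, then renames it into
		// place, which is why the log shows CREATE, WRITE, RENAME, CREATE.
		log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
	}
}

The write-to-temp-then-rename pattern is what keeps consumers from ever reading a half-written conflist.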
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	100faf4b82704       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   4bc3bf70ee1ab       kube-proxy-89wjk                       kube-system
	162230a9828f1       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   d36a48f2bb5b5       kube-controller-manager-pause-318241   kube-system
	e3b0a51b8f971       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   b801179981ad6       etcd-pause-318241                      kube-system
	092092a5e6fbf       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   7aea9716b33be       kube-apiserver-pause-318241            kube-system
	431e449a50a15       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   21 seconds ago       Running             kube-scheduler            1                   ae12b00f0c7d0       kube-scheduler-pause-318241            kube-system
	2c5fd5eef06a9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   ebcf9d1d4a0be       coredns-66bc5c9577-zg2b2               kube-system
	d09d707c6449d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   58f0d6c82b89e       kindnet-cn6qx                          kube-system
	c9267d0dc5c98       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   ebcf9d1d4a0be       coredns-66bc5c9577-zg2b2               kube-system
	5e89137797c50       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   4bc3bf70ee1ab       kube-proxy-89wjk                       kube-system
	9c006bdb41bb7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   58f0d6c82b89e       kindnet-cn6qx                          kube-system
	eecbb13a8e76e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   b801179981ad6       etcd-pause-318241                      kube-system
	3c425b435a3a8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   ae12b00f0c7d0       kube-scheduler-pause-318241            kube-system
	48ff4e8f7de76       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   d36a48f2bb5b5       kube-controller-manager-pause-318241   kube-system
	9297d505fbac5       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   7aea9716b33be       kube-apiserver-pause-318241            kube-system
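
Each control-plane component appears twice in the table: attempt 0 Exited (stopped when the cluster was paused) and attempt 1 Running (recreated in the same pod sandbox after unpause, hence the identical POD IDs). The listing comes from crictl, which minikube issues over SSH (ssh_runner.go/cri.go); a hypothetical local equivalent, assuming it runs on the node with passwordless sudo:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same label filter as the cri.go invocation earlier in this log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out) // one container ID per line (the "found id:" entries above)
}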
	
	
	==> coredns [2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44284 - 62561 "HINFO IN 6452898302968458127.7567510728403302696. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034839298s
	
	
	==> coredns [c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38091 - 34453 "HINFO IN 1830135088201327870.2700890322571675326. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024041183s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-318241
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-318241
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=pause-318241
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_34_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:34:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-318241
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:35:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:34:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:34:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:34:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:35:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-318241
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                5c979713-b6da-4228-ac0e-da304970a9da
	  Boot ID:                    ff73813c-a05d-46ba-ba43-f4a4c3dc42b1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-zg2b2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-318241                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-cn6qx                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-318241             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-318241    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-89wjk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-318241             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientPID     89s (x8 over 89s)  kubelet          Node pause-318241 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 89s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node pause-318241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node pause-318241 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 89s                kubelet          Starting kubelet.
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-318241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-318241 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-318241 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-318241 event: Registered Node pause-318241 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-318241 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-318241 event: Registered Node pause-318241 in Controller
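
The duplicated Starting/CgroupV1/RegisteredNode events reflect the kubelet and controller-manager restarting across the pause/unpause cycle. The "Ready" wait logged earlier (node_ready.go) boils down to polling the node's Ready condition; a minimal client-go sketch, with a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical kubeconfig path; the test harness uses its own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		if ready, err := nodeReady(cs, "pause-318241"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}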
	
	
	==> dmesg <==
	[Dec13 10:59] overlayfs: idmapped layers are currently not supported
	[Dec13 11:00] overlayfs: idmapped layers are currently not supported
	[Dec13 11:01] overlayfs: idmapped layers are currently not supported
	[  +3.910612] overlayfs: idmapped layers are currently not supported
	[Dec13 11:02] overlayfs: idmapped layers are currently not supported
	[Dec13 11:03] overlayfs: idmapped layers are currently not supported
	[Dec13 11:04] overlayfs: idmapped layers are currently not supported
	[Dec13 11:09] overlayfs: idmapped layers are currently not supported
	[ +31.625971] overlayfs: idmapped layers are currently not supported
	[Dec13 11:10] overlayfs: idmapped layers are currently not supported
	[Dec13 11:12] overlayfs: idmapped layers are currently not supported
	[Dec13 11:13] overlayfs: idmapped layers are currently not supported
	[Dec13 11:14] overlayfs: idmapped layers are currently not supported
	[Dec13 11:15] overlayfs: idmapped layers are currently not supported
	[  +7.705175] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[ +26.259109] overlayfs: idmapped layers are currently not supported
	[Dec13 11:17] overlayfs: idmapped layers are currently not supported
	[ +22.550073] overlayfs: idmapped layers are currently not supported
	[Dec13 11:18] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 11:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a] <==
	{"level":"warn","ts":"2025-12-13T11:35:34.559966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.585907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.607237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.631782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.646001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.662206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.686508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.700747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.747379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.767215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.788913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.811222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.826791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.856683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.877962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.893789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.921687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.947170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.965367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.998383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.045835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.054923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.079588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.100500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.193985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	
	
	==> etcd [eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc] <==
	{"level":"warn","ts":"2025-12-13T11:34:29.192220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.211272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.231143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.252782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.269827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.284307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.353757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35818","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T11:35:23.630198Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T11:35:23.630278Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-318241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-13T11:35:23.630509Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:35:23.783320Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:35:23.783428Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:35:23.783453Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-13T11:35:23.783540Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T11:35:23.783557Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783606Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783674Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:35:23.783727Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783805Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783828Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:35:23.783836Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:35:23.786818Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-13T11:35:23.786904Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:35:23.786934Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:35:23.786949Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-318241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 11:35:54 up  6:18,  0 user,  load average: 1.71, 1.36, 1.58
	Linux pause-318241 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3] <==
	I1213 11:34:38.712506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:34:38.713581       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:34:38.713728       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:34:38.713740       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:34:38.713752       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:34:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:34:38.913166       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:34:38.913185       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:34:38.913193       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:34:38.913517       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:35:08.912768       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:35:08.913731       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 11:35:08.913748       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:35:08.913836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 11:35:10.313344       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:35:10.313448       1 metrics.go:72] Registering metrics
	I1213 11:35:10.313523       1 controller.go:711] "Syncing nftables rules"
	I1213 11:35:18.918102       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:35:18.918156       1 main.go:301] handling current node
	
	
	==> kindnet [d09d707c6449d5c8655c76e36cdbd8a6b6047eb6ae5c89fc89a58a57e3ee51fe] <==
	I1213 11:35:32.525293       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:35:32.535355       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:35:32.536636       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:35:32.536702       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:35:32.536741       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:35:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:35:32.730711       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:35:32.730817       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:35:32.730853       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:35:32.731214       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:35:36.631164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:35:36.631270       1 metrics.go:72] Registering metrics
	I1213 11:35:36.631375       1 controller.go:711] "Syncing nftables rules"
	I1213 11:35:42.726584       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:35:42.726742       1 main.go:301] handling current node
	I1213 11:35:52.727064       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:35:52.727098       1 main.go:301] handling current node
	
	
	==> kube-apiserver [092092a5e6fbfc1509140795d2624c91baaa2f8e8aca4835a3c725f7a0a68236] <==
	I1213 11:35:36.487363       1 aggregator.go:171] initial CRD sync complete...
	I1213 11:35:36.487392       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 11:35:36.487399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:35:36.487406       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:35:36.489300       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:35:36.489517       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:35:36.489549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:35:36.489566       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:35:36.503015       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:35:36.503782       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 11:35:36.514536       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 11:35:36.514723       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:35:36.515347       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 11:35:36.522457       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:35:36.587356       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:35:36.587865       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1213 11:35:36.602347       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:35:36.634275       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 11:35:36.634337       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:35:37.091984       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:35:37.815634       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:35:39.169464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:35:39.467737       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:35:39.517304       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 11:35:39.569158       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8] <==
	W1213 11:35:23.657872       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.657961       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658017       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658071       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658121       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658169       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658218       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658268       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658317       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658367       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658419       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658471       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658519       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658572       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658623       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658673       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658724       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658773       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658825       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658908       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658962       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.659012       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.659148       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.659220       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1] <==
	I1213 11:35:39.168956       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:35:39.170347       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:35:39.171540       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:35:39.174027       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:35:39.176124       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:35:39.177305       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 11:35:39.181724       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:35:39.181746       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:35:39.181752       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:35:39.183697       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 11:35:39.185883       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 11:35:39.187284       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 11:35:39.196153       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 11:35:39.196266       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 11:35:39.196347       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-318241"
	I1213 11:35:39.196394       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 11:35:39.196581       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:35:39.199084       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 11:35:39.200662       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:35:39.211341       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 11:35:39.211428       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:35:39.211848       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:35:39.213000       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 11:35:39.214194       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 11:35:39.216372       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	
	
	==> kube-controller-manager [48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c] <==
	I1213 11:34:37.154618       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-318241"
	I1213 11:34:37.154678       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 11:34:37.155207       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:34:37.155422       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 11:34:37.155812       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 11:34:37.162059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:34:37.168072       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-318241" podCIDRs=["10.244.0.0/24"]
	I1213 11:34:37.168147       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:34:37.168276       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:34:37.177598       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:34:37.182162       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 11:34:37.185886       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:34:37.200256       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 11:34:37.201266       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 11:34:37.205342       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 11:34:37.205850       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 11:34:37.205934       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 11:34:37.206139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:34:37.206184       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:34:37.206213       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:34:37.206320       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:34:37.206454       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:34:37.206538       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:34:37.206793       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 11:35:22.160283       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121] <==
	I1213 11:35:36.763084       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:35:37.103749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:35:37.208396       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:35:37.208448       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:35:37.208539       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:35:37.609682       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:35:37.609821       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:35:37.628017       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:35:37.628390       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:35:37.628455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:35:37.642733       1 config.go:200] "Starting service config controller"
	I1213 11:35:37.642765       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:35:37.642814       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:35:37.642820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:35:37.642834       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:35:37.642844       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:35:37.644148       1 config.go:309] "Starting node config controller"
	I1213 11:35:37.644170       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:35:37.644178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:35:37.743397       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:35:37.743512       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:35:37.743586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230] <==
	I1213 11:34:38.678453       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:34:38.875057       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:34:38.975785       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:34:38.975818       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:34:38.975885       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:34:39.043797       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:34:39.043862       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:34:39.050766       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:34:39.051056       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:34:39.051076       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:34:39.053248       1 config.go:200] "Starting service config controller"
	I1213 11:34:39.053266       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:34:39.053279       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:34:39.053284       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:34:39.053299       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:34:39.053303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:34:39.054089       1 config.go:309] "Starting node config controller"
	I1213 11:34:39.054158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:34:39.054186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:34:39.153836       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:34:39.153845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:34:39.153863       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1] <==
	E1213 11:34:30.196004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:34:30.196100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 11:34:30.196168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:34:31.001786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 11:34:31.022659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 11:34:31.078766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 11:34:31.140005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 11:34:31.176534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 11:34:31.194410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:34:31.195298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 11:34:31.197640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 11:34:31.221261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 11:34:31.333952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 11:34:31.377356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 11:34:31.390747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 11:34:31.424317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 11:34:31.448653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:34:31.482442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 11:34:33.180188       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:23.617366       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 11:35:23.617529       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 11:35:23.617726       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 11:35:23.617760       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:23.623729       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 11:35:23.623769       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [431e449a50a15c97ddcd5984e57f5efe1f17a676daffe8932f907551ab539972] <==
	I1213 11:35:36.646831       1 serving.go:386] Generated self-signed cert in-memory
	I1213 11:35:38.161886       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:35:38.162297       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:35:38.170451       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:35:38.170626       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 11:35:38.170689       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 11:35:38.170748       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:35:38.172358       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:38.173595       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:38.173702       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:35:38.173736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:35:38.271689       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 11:35:38.273766       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:38.274595       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.300792    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55a327512e1f31b42d1108b407175237" pod="kube-system/kube-scheduler-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.301147    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25cd5dd8fe5b10afb6cc108fece163fd" pod="kube-system/etcd-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.301380    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.301808    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cn6qx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ffe31da-1d55-434e-9821-30f3967fa9b5" pod="kube-system/kindnet-cn6qx"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: I1213 11:35:32.314884    1339 scope.go:117] "RemoveContainer" containerID="9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.315547    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55a327512e1f31b42d1108b407175237" pod="kube-system/kube-scheduler-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.315814    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25cd5dd8fe5b10afb6cc108fece163fd" pod="kube-system/etcd-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.315982    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.316135    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cn6qx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ffe31da-1d55-434e-9821-30f3967fa9b5" pod="kube-system/kindnet-cn6qx"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.316349    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-zg2b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="10caf02b-e875-43a4-889f-bf6c434d73dd" pod="kube-system/coredns-66bc5c9577-zg2b2"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.316498    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c9721d581b19638a792d8b55bb352970" pod="kube-system/kube-apiserver-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: I1213 11:35:32.335075    1339 scope.go:117] "RemoveContainer" containerID="5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.335806    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336171    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cn6qx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ffe31da-1d55-434e-9821-30f3967fa9b5" pod="kube-system/kindnet-cn6qx"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336382    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89wjk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6feb766a-9bbb-4051-9713-7b7104e86f7b" pod="kube-system/kube-proxy-89wjk"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336725    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-zg2b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="10caf02b-e875-43a4-889f-bf6c434d73dd" pod="kube-system/coredns-66bc5c9577-zg2b2"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336932    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c9721d581b19638a792d8b55bb352970" pod="kube-system/kube-apiserver-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.337265    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55a327512e1f31b42d1108b407175237" pod="kube-system/kube-scheduler-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.337446    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25cd5dd8fe5b10afb6cc108fece163fd" pod="kube-system/etcd-pause-318241"
	Dec 13 11:35:36 pause-318241 kubelet[1339]: E1213 11:35:36.429898    1339 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-318241\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-318241' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 13 11:35:36 pause-318241 kubelet[1339]: E1213 11:35:36.430688    1339 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-318241\" is forbidden: User \"system:node:pause-318241\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-318241' and this object" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:43 pause-318241 kubelet[1339]: W1213 11:35:43.341481    1339 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 11:35:50 pause-318241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:35:51 pause-318241 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:35:51 pause-318241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-318241 -n pause-318241
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-318241 -n pause-318241: exit status 2 (394.817815ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-318241 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-318241
helpers_test.go:244: (dbg) docker inspect pause-318241:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201",
	        "Created": "2025-12-13T11:34:05.040362913Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1116449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:34:05.10899505Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/hostname",
	        "HostsPath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/hosts",
	        "LogPath": "/var/lib/docker/containers/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201/99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201-json.log",
	        "Name": "/pause-318241",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-318241:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-318241",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99a18544a4b1fc4ef86e81626534631c38ee091240b7dfc3c539fc1894dee201",
	                "LowerDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede-init/diff:/var/lib/docker/overlay2/ae644fe0cc2841f5eea1cee1fab5fa62406b5368ff2c4f1e7ca42815e94a37ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d25e78a84d4d4de9fba3461ab1d6129284929ad56d829118383bc43a18453ede/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-318241",
	                "Source": "/var/lib/docker/volumes/pause-318241/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-318241",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-318241",
	                "name.minikube.sigs.k8s.io": "pause-318241",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d9431fa8955751714218c6240013195119174924fb95e8531af90275817bcdb",
	            "SandboxKey": "/var/run/docker/netns/1d9431fa8955",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-318241": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:6e:ef:a8:c4:6b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6936a9e18475ad8505cbd530811f4f4957e48bb9c3fdaa7604b330d6ce314f4e",
	                    "EndpointID": "44ec637ac2e8e61997ab22ae33e9a005e9715815ddb7cfc7e813a0fd2b163d12",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-318241",
	                        "99a18544a4b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
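Note on the inspect output above: the NetworkSettings.Ports map is where minikube reads back the host ports Docker assigned (33768 for SSH here), and the cli_runner lines later in this log query it with a Go template. A minimal, hypothetical Go sketch of that same lookup (the helper name hostPort and the hard-coded profile name are illustrative, not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort asks the Docker CLI for the host port mapped to a container
    // port, using the same Go template seen in the cli_runner log lines below.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("pause-318241", "22/tcp")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh host port:", p) // would print 33768 for the run captured above
    }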
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-318241 -n pause-318241
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-318241 -n pause-318241: exit status 2 (371.669153ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-318241 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-318241 logs -n 25: (1.398577886s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-885378 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:21 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p missing-upgrade-828630 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-828630    │ jenkins │ v1.35.0 │ 13 Dec 25 11:21 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ delete  │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ ssh     │ -p NoKubernetes-885378 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │                     │
	│ stop    │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p NoKubernetes-885378 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p missing-upgrade-828630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-828630    │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:23 UTC │
	│ ssh     │ -p NoKubernetes-885378 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │                     │
	│ delete  │ -p NoKubernetes-885378                                                                                                                          │ NoKubernetes-885378       │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:22 UTC │
	│ start   │ -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:22 UTC │ 13 Dec 25 11:23 UTC │
	│ stop    │ -p kubernetes-upgrade-060355                                                                                                                    │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:23 UTC │
	│ start   │ -p kubernetes-upgrade-060355 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-060355 │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │                     │
	│ delete  │ -p missing-upgrade-828630                                                                                                                       │ missing-upgrade-828630    │ jenkins │ v1.37.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:23 UTC │
	│ start   │ -p stopped-upgrade-443186 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-443186    │ jenkins │ v1.35.0 │ 13 Dec 25 11:23 UTC │ 13 Dec 25 11:24 UTC │
	│ stop    │ stopped-upgrade-443186 stop                                                                                                                     │ stopped-upgrade-443186    │ jenkins │ v1.35.0 │ 13 Dec 25 11:24 UTC │ 13 Dec 25 11:24 UTC │
	│ start   │ -p stopped-upgrade-443186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-443186    │ jenkins │ v1.37.0 │ 13 Dec 25 11:24 UTC │ 13 Dec 25 11:28 UTC │
	│ delete  │ -p stopped-upgrade-443186                                                                                                                       │ stopped-upgrade-443186    │ jenkins │ v1.37.0 │ 13 Dec 25 11:28 UTC │ 13 Dec 25 11:28 UTC │
	│ start   │ -p running-upgrade-161631 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-161631    │ jenkins │ v1.35.0 │ 13 Dec 25 11:28 UTC │ 13 Dec 25 11:29 UTC │
	│ start   │ -p running-upgrade-161631 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-161631    │ jenkins │ v1.37.0 │ 13 Dec 25 11:29 UTC │ 13 Dec 25 11:33 UTC │
	│ delete  │ -p running-upgrade-161631                                                                                                                       │ running-upgrade-161631    │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p pause-318241 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:35 UTC │
	│ start   │ -p pause-318241 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:35 UTC │ 13 Dec 25 11:35 UTC │
	│ pause   │ -p pause-318241 --alsologtostderr -v=5                                                                                                          │ pause-318241              │ jenkins │ v1.37.0 │ 13 Dec 25 11:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:35:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
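The "Log line format" header above is the standard klog prefix. As a hedged illustration (this parser is not part of minikube), a short Go sketch that splits one of the log lines below into its fields:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    // format documented in the log header above.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

    func main() {
        line := "I1213 11:35:22.270391 1119052 out.go:360] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s thread=%s src=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }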
	I1213 11:35:22.270391 1119052 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:35:22.278389 1119052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:35:22.278548 1119052 out.go:374] Setting ErrFile to fd 2...
	I1213 11:35:22.278571 1119052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:35:22.278888 1119052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:35:22.279397 1119052 out.go:368] Setting JSON to false
	I1213 11:35:22.280533 1119052 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22672,"bootTime":1765603051,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 11:35:22.281360 1119052 start.go:143] virtualization:  
	I1213 11:35:22.284456 1119052 out.go:179] * [pause-318241] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:35:22.289049 1119052 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 11:35:22.289063 1119052 notify.go:221] Checking for updates...
	I1213 11:35:22.292983 1119052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:35:22.296166 1119052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 11:35:22.299058 1119052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 11:35:22.301948 1119052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:35:22.304894 1119052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:35:22.308301 1119052 config.go:182] Loaded profile config "pause-318241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:35:22.309092 1119052 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:35:22.331532 1119052 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:35:22.331647 1119052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:35:22.398328 1119052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:35:22.388656933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:35:22.398446 1119052 docker.go:319] overlay module found
	I1213 11:35:22.401601 1119052 out.go:179] * Using the docker driver based on existing profile
	I1213 11:35:22.404405 1119052 start.go:309] selected driver: docker
	I1213 11:35:22.404427 1119052 start.go:927] validating driver "docker" against &{Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:35:22.404558 1119052 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:35:22.404674 1119052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:35:22.467543 1119052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:35:22.45765305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:35:22.467997 1119052 cni.go:84] Creating CNI manager for ""
	I1213 11:35:22.468067 1119052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:35:22.468135 1119052 start.go:353] cluster config:
	{Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:35:22.471301 1119052 out.go:179] * Starting "pause-318241" primary control-plane node in "pause-318241" cluster
	I1213 11:35:22.477374 1119052 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:35:22.480357 1119052 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:35:22.483356 1119052 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:35:22.483412 1119052 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:35:22.483423 1119052 cache.go:65] Caching tarball of preloaded images
	I1213 11:35:22.483523 1119052 preload.go:238] Found /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:35:22.483547 1119052 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:35:22.483689 1119052 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/config.json ...
	I1213 11:35:22.483942 1119052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:35:22.511850 1119052 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:35:22.511870 1119052 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:35:22.511883 1119052 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:35:22.511918 1119052 start.go:360] acquireMachinesLock for pause-318241: {Name:mkbfa445139c4dfc6002d6ff5760c7517527f5e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:35:22.511973 1119052 start.go:364] duration metric: took 37.932µs to acquireMachinesLock for "pause-318241"
	I1213 11:35:22.511993 1119052 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:35:22.511998 1119052 fix.go:54] fixHost starting: 
	I1213 11:35:22.512265 1119052 cli_runner.go:164] Run: docker container inspect pause-318241 --format={{.State.Status}}
	I1213 11:35:22.534695 1119052 fix.go:112] recreateIfNeeded on pause-318241: state=Running err=<nil>
	W1213 11:35:22.534729 1119052 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:35:22.537936 1119052 out.go:252] * Updating the running docker "pause-318241" container ...
	I1213 11:35:22.537971 1119052 machine.go:94] provisionDockerMachine start ...
	I1213 11:35:22.538058 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:22.559551 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:22.559877 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:22.559887 1119052 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:35:22.713106 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-318241
	
	I1213 11:35:22.713137 1119052 ubuntu.go:182] provisioning hostname "pause-318241"
	I1213 11:35:22.713206 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:22.730988 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:22.731322 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:22.731347 1119052 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-318241 && echo "pause-318241" | sudo tee /etc/hostname
	I1213 11:35:22.895627 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-318241
	
	I1213 11:35:22.895734 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:22.918298 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:22.918623 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:22.918646 1119052 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-318241' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-318241/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-318241' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:35:23.073983 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
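The SSH command just above is an idempotent /etc/hosts update: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. A minimal in-memory Go rendition of the same decision logic (ensureHost is a hypothetical helper; real code would operate on the file itself with appropriate privileges):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHost reproduces the shell logic above on an in-memory hosts file:
    // do nothing if the name is present, rewrite a 127.0.1.1 line if one
    // exists, otherwise append a fresh entry.
    func ensureHost(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // already mapped
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHost("127.0.0.1 localhost\n", "pause-318241"))
    }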
	I1213 11:35:23.074073 1119052 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-904040/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-904040/.minikube}
	I1213 11:35:23.074125 1119052 ubuntu.go:190] setting up certificates
	I1213 11:35:23.074158 1119052 provision.go:84] configureAuth start
	I1213 11:35:23.074255 1119052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-318241
	I1213 11:35:23.092495 1119052 provision.go:143] copyHostCerts
	I1213 11:35:23.092571 1119052 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem, removing ...
	I1213 11:35:23.092580 1119052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem
	I1213 11:35:23.092653 1119052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/ca.pem (1082 bytes)
	I1213 11:35:23.092751 1119052 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem, removing ...
	I1213 11:35:23.092763 1119052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem
	I1213 11:35:23.092790 1119052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/cert.pem (1123 bytes)
	I1213 11:35:23.092877 1119052 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem, removing ...
	I1213 11:35:23.092883 1119052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem
	I1213 11:35:23.092912 1119052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-904040/.minikube/key.pem (1675 bytes)
	I1213 11:35:23.092962 1119052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem org=jenkins.pause-318241 san=[127.0.0.1 192.168.85.2 localhost minikube pause-318241]
	I1213 11:35:23.229482 1119052 provision.go:177] copyRemoteCerts
	I1213 11:35:23.229608 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:35:23.229668 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:23.251873 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:23.369516 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:35:23.387144 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:35:23.405593 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:35:23.423216 1119052 provision.go:87] duration metric: took 349.014314ms to configureAuth
	I1213 11:35:23.423243 1119052 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:35:23.423470 1119052 config.go:182] Loaded profile config "pause-318241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:35:23.423589 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:23.442305 1119052 main.go:143] libmachine: Using SSH client type: native
	I1213 11:35:23.442628 1119052 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33768 <nil> <nil>}
	I1213 11:35:23.442652 1119052 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:35:28.826935 1119052 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:35:28.826958 1119052 machine.go:97] duration metric: took 6.288977988s to provisionDockerMachine
	I1213 11:35:28.826971 1119052 start.go:293] postStartSetup for "pause-318241" (driver="docker")
	I1213 11:35:28.826982 1119052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:35:28.827048 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:35:28.827109 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:28.846063 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:28.953554 1119052 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:35:28.956915 1119052 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:35:28.956945 1119052 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:35:28.956959 1119052 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/addons for local assets ...
	I1213 11:35:28.957015 1119052 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-904040/.minikube/files for local assets ...
	I1213 11:35:28.957099 1119052 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem -> 9074842.pem in /etc/ssl/certs
	I1213 11:35:28.957207 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:35:28.964665 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 11:35:28.982982 1119052 start.go:296] duration metric: took 155.994925ms for postStartSetup
	I1213 11:35:28.983065 1119052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:35:28.983128 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:29.001385 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:29.106990 1119052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:35:29.112299 1119052 fix.go:56] duration metric: took 6.600280102s for fixHost
	I1213 11:35:29.112330 1119052 start.go:83] releasing machines lock for "pause-318241", held for 6.600348419s
	I1213 11:35:29.112426 1119052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-318241
	I1213 11:35:29.140287 1119052 ssh_runner.go:195] Run: cat /version.json
	I1213 11:35:29.140332 1119052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:35:29.140345 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:29.140399 1119052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318241
	I1213 11:35:29.159219 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:29.161253 1119052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33768 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/pause-318241/id_rsa Username:docker}
	I1213 11:35:29.265631 1119052 ssh_runner.go:195] Run: systemctl --version
	I1213 11:35:29.363868 1119052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:35:29.409048 1119052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:35:29.414699 1119052 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:35:29.414827 1119052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:35:29.422782 1119052 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:35:29.422808 1119052 start.go:496] detecting cgroup driver to use...
	I1213 11:35:29.422860 1119052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:35:29.422937 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:35:29.439399 1119052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:35:29.452937 1119052 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:35:29.453001 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:35:29.470853 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:35:29.484206 1119052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:35:29.629762 1119052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:35:29.771591 1119052 docker.go:234] disabling docker service ...
	I1213 11:35:29.771726 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:35:29.786515 1119052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:35:29.799962 1119052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:35:29.941053 1119052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:35:30.120096 1119052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:35:30.136397 1119052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:35:30.154347 1119052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:35:30.154479 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.164961 1119052 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:35:30.165093 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.175512 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.185082 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.194864 1119052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:35:30.203560 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.213309 1119052 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.222486 1119052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:35:30.231605 1119052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:35:30.239491 1119052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:35:30.247402 1119052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:30.385703 1119052 ssh_runner.go:195] Run: sudo systemctl restart crio
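The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. As a rough sketch, assuming the config is held as a string, the two central substitutions (pause image and cgroup manager) look like this in Go (patchCrioConf is illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "regexp"
    )

    // patchCrioConf applies the two substitutions the sed commands above
    // perform: pin the pause image and force the chosen cgroup manager.
    func patchCrioConf(conf, pauseImage, cgroupMgr string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
        return conf
    }

    func main() {
        in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
    }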
	I1213 11:35:30.596707 1119052 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:35:30.596782 1119052 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:35:30.600592 1119052 start.go:564] Will wait 60s for crictl version
	I1213 11:35:30.600756 1119052 ssh_runner.go:195] Run: which crictl
	I1213 11:35:30.604239 1119052 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:35:30.628032 1119052 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
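crictl version prints the colon-separated fields quoted above. A small Go sketch, assuming that key/value layout, that turns the text into a map (parseCrictlVersion is a hypothetical helper, not part of the test suite):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseCrictlVersion splits the "Key:  value" lines shown above into a map.
    func parseCrictlVersion(out string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
                fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        return fields
    }

    func main() {
        out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.34.3\nRuntimeApiVersion:  v1\n"
        f := parseCrictlVersion(out)
        fmt.Println(f["RuntimeName"], f["RuntimeVersion"]) // cri-o 1.34.3
    }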
	I1213 11:35:30.628161 1119052 ssh_runner.go:195] Run: crio --version
	I1213 11:35:30.657427 1119052 ssh_runner.go:195] Run: crio --version
	I1213 11:35:30.694200 1119052 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:35:30.697365 1119052 cli_runner.go:164] Run: docker network inspect pause-318241 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:35:30.713360 1119052 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:35:30.717316 1119052 kubeadm.go:884] updating cluster {Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:35:30.717458 1119052 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:35:30.717511 1119052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:35:30.760464 1119052 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:35:30.760487 1119052 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:35:30.760553 1119052 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:35:30.791999 1119052 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:35:30.792026 1119052 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:35:30.792034 1119052 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 11:35:30.792142 1119052 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-318241 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:35:30.792235 1119052 ssh_runner.go:195] Run: crio config
	I1213 11:35:30.871353 1119052 cni.go:84] Creating CNI manager for ""
	I1213 11:35:30.871434 1119052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:35:30.871472 1119052 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:35:30.871510 1119052 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-318241 NodeName:pause-318241 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:35:30.871666 1119052 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-318241"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
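The generated kubeadm config above stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one file, which the next log lines show being shipped to /var/tmp/minikube/kubeadm.yaml.new. A dependency-free Go sketch (kinds is a hypothetical helper) that splits such a multi-document file on the standalone --- separators and reports each document's kind:

    package main

    import (
        "fmt"
        "strings"
    )

    // kinds lists the `kind:` of every document in a multi-doc YAML string,
    // splitting on the standalone "---" separators used above.
    func kinds(multiDoc string) []string {
        var out []string
        for _, doc := range strings.Split(multiDoc, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind:") {
                    out = append(out, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
                }
            }
        }
        return out
    }

    func main() {
        cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
        fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
    }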
	I1213 11:35:30.871778 1119052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:35:30.879877 1119052 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:35:30.879954 1119052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:35:30.887862 1119052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1213 11:35:30.901565 1119052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:35:30.915547 1119052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1213 11:35:30.928617 1119052 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:35:30.932501 1119052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.061531 1119052 ssh_runner.go:195] Run: sudo systemctl start kubelet
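
	The daemon-reload/start pair above is the standard systemd sequence after rewriting unit files: reload so systemd re-reads the freshly written 10-kubeadm.conf drop-in, then start the kubelet. A minimal local sketch of the same two steps with os/exec (minikube actually runs them remotely through its ssh_runner):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and surfaces its combined output on failure.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		run("sudo", "systemctl", "daemon-reload")
		run("sudo", "systemctl", "start", "kubelet")
	}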
	I1213 11:35:31.076242 1119052 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241 for IP: 192.168.85.2
	I1213 11:35:31.076264 1119052 certs.go:195] generating shared ca certs ...
	I1213 11:35:31.076282 1119052 certs.go:227] acquiring lock for ca certs: {Name:mk8a4f8a0a31c02fdf751ce601bdbbea6f5a03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:31.076431 1119052 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key
	I1213 11:35:31.076484 1119052 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key
	I1213 11:35:31.076497 1119052 certs.go:257] generating profile certs ...
	I1213 11:35:31.076593 1119052 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.key
	I1213 11:35:31.076674 1119052 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/apiserver.key.45c3d61f
	I1213 11:35:31.076759 1119052 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/proxy-client.key
	I1213 11:35:31.076898 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem (1338 bytes)
	W1213 11:35:31.076945 1119052 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484_empty.pem, impossibly tiny 0 bytes
	I1213 11:35:31.076959 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:35:31.076989 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:35:31.077018 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:35:31.077048 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/certs/key.pem (1675 bytes)
	I1213 11:35:31.077099 1119052 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem (1708 bytes)
	I1213 11:35:31.077796 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:35:31.097522 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:35:31.117118 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:35:31.136729 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:35:31.155950 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 11:35:31.174504 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:35:31.192150 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:35:31.209765 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:35:31.227756 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/certs/907484.pem --> /usr/share/ca-certificates/907484.pem (1338 bytes)
	I1213 11:35:31.245040 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/ssl/certs/9074842.pem --> /usr/share/ca-certificates/9074842.pem (1708 bytes)
	I1213 11:35:31.263804 1119052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:35:31.281616 1119052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:35:31.293769 1119052 ssh_runner.go:195] Run: openssl version
	I1213 11:35:31.300088 1119052 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.307331 1119052 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/907484.pem /etc/ssl/certs/907484.pem
	I1213 11:35:31.314654 1119052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.318358 1119052 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:21 /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.318430 1119052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/907484.pem
	I1213 11:35:31.359104 1119052 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:35:31.366426 1119052 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.373617 1119052 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9074842.pem /etc/ssl/certs/9074842.pem
	I1213 11:35:31.380828 1119052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.384602 1119052 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:21 /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.384670 1119052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9074842.pem
	I1213 11:35:31.425866 1119052 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:35:31.433263 1119052 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.440734 1119052 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:35:31.448145 1119052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.451720 1119052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:11 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.451787 1119052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:35:31.493196 1119052 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
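
	The openssl x509 -hash / ln -fs / test -L triplets above follow OpenSSL's c_rehash convention: trust lookups in /etc/ssl/certs resolve through symlinks named <subject-hash>.0, so each installed CA needs a hash-named link. A rough sketch of creating such a link; the certificate path is illustrative and the program would need root to write into /etc/ssl/certs:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path

		// Subject hash, e.g. "b5213941"; OpenSSL resolves trust via <hash>.0 links.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // emulate ln -fs: replace any existing link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
	}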
	I1213 11:35:31.500961 1119052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:35:31.505419 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:35:31.547311 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:35:31.589448 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:35:31.630220 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:35:31.670943 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:35:31.711458 1119052 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
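
	Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how the expiring control-plane certs are detected. The equivalent check in pure Go with the standard crypto/x509 package, using an illustrative file path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // illustrative
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of -checkend 86400: fail if NotAfter falls within 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h")
	}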
	I1213 11:35:31.751747 1119052 kubeadm.go:401] StartCluster: {Name:pause-318241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-318241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:35:31.751876 1119052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:35:31.751951 1119052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:35:31.781827 1119052 cri.go:89] found id: "c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61"
	I1213 11:35:31.781848 1119052 cri.go:89] found id: "5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	I1213 11:35:31.781854 1119052 cri.go:89] found id: "9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3"
	I1213 11:35:31.781858 1119052 cri.go:89] found id: "eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc"
	I1213 11:35:31.781862 1119052 cri.go:89] found id: "3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1"
	I1213 11:35:31.781866 1119052 cri.go:89] found id: "48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c"
	I1213 11:35:31.781869 1119052 cri.go:89] found id: "9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	I1213 11:35:31.781873 1119052 cri.go:89] found id: ""
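
	The IDs listed above come from the crictl invocation two lines earlier: --quiet prints one container ID per line and --label filters on the pod namespace. A simplified sketch of driving the same command from Go (the log wraps it in sudo -s eval; this version calls crictl directly):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		// One container ID per output line.
		ids := strings.Fields(strings.TrimSpace(string(out)))
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}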
	I1213 11:35:31.781925 1119052 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:35:31.795227 1119052 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:35:31Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:35:31.795299 1119052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:35:31.802945 1119052 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:35:31.803018 1119052 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:35:31.803076 1119052 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:35:31.810586 1119052 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:35:31.811285 1119052 kubeconfig.go:125] found "pause-318241" server: "https://192.168.85.2:8443"
	I1213 11:35:31.812051 1119052 kapi.go:59] client config for pause-318241: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:35:31.812558 1119052 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:35:31.812582 1119052 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 11:35:31.812587 1119052 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:35:31.812592 1119052 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 11:35:31.812600 1119052 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 11:35:31.812851 1119052 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:35:31.820421 1119052 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:35:31.820452 1119052 kubeadm.go:602] duration metric: took 17.420235ms to restartPrimaryControlPlane
	I1213 11:35:31.820463 1119052 kubeadm.go:403] duration metric: took 68.72565ms to StartCluster
	I1213 11:35:31.820478 1119052 settings.go:142] acquiring lock: {Name:mk93988d167ba25bb331a8426f9b2f4ef25dd844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:31.820537 1119052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 11:35:31.821421 1119052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/kubeconfig: {Name:mk623f80012ba74b924bdfcf4e2ec5178c2702f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:35:31.821669 1119052 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:35:31.822014 1119052 config.go:182] Loaded profile config "pause-318241": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:35:31.822064 1119052 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:35:31.827356 1119052 out.go:179] * Enabled addons: 
	I1213 11:35:31.827364 1119052 out.go:179] * Verifying Kubernetes components...
	I1213 11:35:31.830027 1119052 addons.go:530] duration metric: took 7.960063ms for enable addons: enabled=[]
	I1213 11:35:31.830114 1119052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:35:31.956307 1119052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:35:31.969701 1119052 node_ready.go:35] waiting up to 6m0s for node "pause-318241" to be "Ready" ...
	I1213 11:35:36.548509 1119052 node_ready.go:49] node "pause-318241" is "Ready"
	I1213 11:35:36.548542 1119052 node_ready.go:38] duration metric: took 4.578805955s for node "pause-318241" to be "Ready" ...
	I1213 11:35:36.548556 1119052 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:35:36.548616 1119052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:36.568079 1119052 api_server.go:72] duration metric: took 4.746371943s to wait for apiserver process to appear ...
	I1213 11:35:36.568115 1119052 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:35:36.568135 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:36.598687 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:35:36.598720 1119052 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:35:37.068245 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:37.076783 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:35:37.076830 1119052 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:35:37.568271 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:37.579707 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:35:37.579747 1119052 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:35:38.068252 1119052 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:35:38.079823 1119052 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 11:35:38.080933 1119052 api_server.go:141] control plane version: v1.34.2
	I1213 11:35:38.080980 1119052 api_server.go:131] duration metric: took 1.5128523s to wait for apiserver health ...
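
	The sequence above is typical of an apiserver restart: /healthz returns 500 while the post-start hooks finish (rbac/bootstrap-roles is the last holdout here), then flips to 200. A minimal polling sketch against the same endpoint; it skips TLS verification only to keep the example short, whereas the real client trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: production code would load the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("timed out waiting for healthz")
	}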
	I1213 11:35:38.080990 1119052 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:35:38.085191 1119052 system_pods.go:59] 7 kube-system pods found
	I1213 11:35:38.085235 1119052 system_pods.go:61] "coredns-66bc5c9577-zg2b2" [10caf02b-e875-43a4-889f-bf6c434d73dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:35:38.085247 1119052 system_pods.go:61] "etcd-pause-318241" [c1406768-08f7-46bf-a952-9df2246da639] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:35:38.085253 1119052 system_pods.go:61] "kindnet-cn6qx" [4ffe31da-1d55-434e-9821-30f3967fa9b5] Running
	I1213 11:35:38.085262 1119052 system_pods.go:61] "kube-apiserver-pause-318241" [fc56ce75-2daa-48d0-8ef6-2c389e85e550] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:35:38.085272 1119052 system_pods.go:61] "kube-controller-manager-pause-318241" [dedf20d0-fc79-4e2f-b316-fb14f00600dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:35:38.085288 1119052 system_pods.go:61] "kube-proxy-89wjk" [6feb766a-9bbb-4051-9713-7b7104e86f7b] Running
	I1213 11:35:38.085299 1119052 system_pods.go:61] "kube-scheduler-pause-318241" [96c9bf0f-5d19-4b7b-9623-fb92d3d572fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:35:38.085306 1119052 system_pods.go:74] duration metric: took 4.308851ms to wait for pod list to return data ...
	I1213 11:35:38.085319 1119052 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:35:38.088386 1119052 default_sa.go:45] found service account: "default"
	I1213 11:35:38.088409 1119052 default_sa.go:55] duration metric: took 3.084823ms for default service account to be created ...
	I1213 11:35:38.088417 1119052 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:35:38.091474 1119052 system_pods.go:86] 7 kube-system pods found
	I1213 11:35:38.091515 1119052 system_pods.go:89] "coredns-66bc5c9577-zg2b2" [10caf02b-e875-43a4-889f-bf6c434d73dd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:35:38.091524 1119052 system_pods.go:89] "etcd-pause-318241" [c1406768-08f7-46bf-a952-9df2246da639] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:35:38.091530 1119052 system_pods.go:89] "kindnet-cn6qx" [4ffe31da-1d55-434e-9821-30f3967fa9b5] Running
	I1213 11:35:38.091536 1119052 system_pods.go:89] "kube-apiserver-pause-318241" [fc56ce75-2daa-48d0-8ef6-2c389e85e550] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:35:38.091543 1119052 system_pods.go:89] "kube-controller-manager-pause-318241" [dedf20d0-fc79-4e2f-b316-fb14f00600dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:35:38.091547 1119052 system_pods.go:89] "kube-proxy-89wjk" [6feb766a-9bbb-4051-9713-7b7104e86f7b] Running
	I1213 11:35:38.091554 1119052 system_pods.go:89] "kube-scheduler-pause-318241" [96c9bf0f-5d19-4b7b-9623-fb92d3d572fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:35:38.091571 1119052 system_pods.go:126] duration metric: took 3.141717ms to wait for k8s-apps to be running ...
	I1213 11:35:38.091592 1119052 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:35:38.091664 1119052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:35:38.108005 1119052 system_svc.go:56] duration metric: took 16.404414ms WaitForService to wait for kubelet
	I1213 11:35:38.108046 1119052 kubeadm.go:587] duration metric: took 6.286341749s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:35:38.108065 1119052 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:35:38.113657 1119052 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:35:38.113692 1119052 node_conditions.go:123] node cpu capacity is 2
	I1213 11:35:38.113706 1119052 node_conditions.go:105] duration metric: took 5.636446ms to run NodePressure ...
	I1213 11:35:38.113718 1119052 start.go:242] waiting for startup goroutines ...
	I1213 11:35:38.113732 1119052 start.go:247] waiting for cluster config update ...
	I1213 11:35:38.113748 1119052 start.go:256] writing updated cluster config ...
	I1213 11:35:38.114093 1119052 ssh_runner.go:195] Run: rm -f paused
	I1213 11:35:38.118390 1119052 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:35:38.119080 1119052 kapi.go:59] client config for pause-318241: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/profiles/pause-318241/client.key", CAFile:"/home/jenkins/minikube-integration/22128-904040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:35:38.126994 1119052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zg2b2" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:35:40.132373 1119052 pod_ready.go:104] pod "coredns-66bc5c9577-zg2b2" is not "Ready", error: <nil>
	W1213 11:35:42.134196 1119052 pod_ready.go:104] pod "coredns-66bc5c9577-zg2b2" is not "Ready", error: <nil>
	I1213 11:35:44.633056 1119052 pod_ready.go:94] pod "coredns-66bc5c9577-zg2b2" is "Ready"
	I1213 11:35:44.633086 1119052 pod_ready.go:86] duration metric: took 6.506052781s for pod "coredns-66bc5c9577-zg2b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:44.635755 1119052 pod_ready.go:83] waiting for pod "etcd-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:44.640548 1119052 pod_ready.go:94] pod "etcd-pause-318241" is "Ready"
	I1213 11:35:44.640577 1119052 pod_ready.go:86] duration metric: took 4.788102ms for pod "etcd-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:44.643123 1119052 pod_ready.go:83] waiting for pod "kube-apiserver-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:35:46.648036 1119052 pod_ready.go:104] pod "kube-apiserver-pause-318241" is not "Ready", error: <nil>
	W1213 11:35:48.649188 1119052 pod_ready.go:104] pod "kube-apiserver-pause-318241" is not "Ready", error: <nil>
	I1213 11:35:50.148694 1119052 pod_ready.go:94] pod "kube-apiserver-pause-318241" is "Ready"
	I1213 11:35:50.148720 1119052 pod_ready.go:86] duration metric: took 5.505569752s for pod "kube-apiserver-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.151033 1119052 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.155855 1119052 pod_ready.go:94] pod "kube-controller-manager-pause-318241" is "Ready"
	I1213 11:35:50.155887 1119052 pod_ready.go:86] duration metric: took 4.825148ms for pod "kube-controller-manager-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.158500 1119052 pod_ready.go:83] waiting for pod "kube-proxy-89wjk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.163340 1119052 pod_ready.go:94] pod "kube-proxy-89wjk" is "Ready"
	I1213 11:35:50.163365 1119052 pod_ready.go:86] duration metric: took 4.793935ms for pod "kube-proxy-89wjk" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.166019 1119052 pod_ready.go:83] waiting for pod "kube-scheduler-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.430692 1119052 pod_ready.go:94] pod "kube-scheduler-pause-318241" is "Ready"
	I1213 11:35:50.430723 1119052 pod_ready.go:86] duration metric: took 264.681075ms for pod "kube-scheduler-pause-318241" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:35:50.430737 1119052 pod_ready.go:40] duration metric: took 12.312305208s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
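
	The pod_ready loop above polls each kube-system pod until its PodReady condition reports True. A hedged client-go sketch of that check for a single pod, with the kubeconfig path as a placeholder:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-66bc5c9577-zg2b2", metav1.GetOptions{})
			if err != nil {
				log.Fatal(err)
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
			time.Sleep(2 * time.Second) // the log polls on a similar cadence
		}
	}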
	I1213 11:35:50.484118 1119052 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 11:35:50.487315 1119052 out.go:179] * Done! kubectl is now configured to use "pause-318241" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.449157079Z" level=info msg="Created container e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a: kube-system/etcd-pause-318241/etcd" id=4fe7b47d-bdd7-4903-8577-dea117296086 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.450310166Z" level=info msg="Starting container: e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a" id=3418ffe3-1fce-49fc-b990-07f56852843a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.454433374Z" level=info msg="Started container" PID=2390 containerID=2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c description=kube-system/coredns-66bc5c9577-zg2b2/coredns id=a9ff23b3-3cc0-48f8-ae6c-dc1a5655f98c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebcf9d1d4a0be828b273e5a4033222bd2207adad074789a4869b1562f722cac4
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.463933292Z" level=info msg="Started container" PID=2400 containerID=e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a description=kube-system/etcd-pause-318241/etcd id=3418ffe3-1fce-49fc-b990-07f56852843a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b801179981ad68000ddc3231258864349d7e708b2a31a2909728e2e23dddb2a0
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.471556244Z" level=info msg="Created container 162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1: kube-system/kube-controller-manager-pause-318241/kube-controller-manager" id=5216e7f1-605b-48be-a9aa-2e398250688e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.472144246Z" level=info msg="Starting container: 162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1" id=00225c47-380b-40a3-be47-c809d1e1d69e name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.485089188Z" level=info msg="Started container" PID=2406 containerID=162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1 description=kube-system/kube-controller-manager-pause-318241/kube-controller-manager id=00225c47-380b-40a3-be47-c809d1e1d69e name=/runtime.v1.RuntimeService/StartContainer sandboxID=d36a48f2bb5b58c6b3940026dc2a98d13fe977627e5fb318f1aec1a4326b18a9
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.610077142Z" level=info msg="Created container 100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121: kube-system/kube-proxy-89wjk/kube-proxy" id=a21872a9-76f1-4e2e-b666-8b25653e8ada name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.610709403Z" level=info msg="Starting container: 100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121" id=9be1798e-891e-4fc8-9755-ef635483a7ca name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:35:32 pause-318241 crio[2100]: time="2025-12-13T11:35:32.613129357Z" level=info msg="Started container" PID=2436 containerID=100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121 description=kube-system/kube-proxy-89wjk/kube-proxy id=9be1798e-891e-4fc8-9755-ef635483a7ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=4bc3bf70ee1abd2d653880402b8164fb27551a22eb66ed7301bb00e5495c8da7
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.727090469Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.73152308Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.731558223Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.731580516Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.734840824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.734888143Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.73491111Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.73844008Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.738476224Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.738499215Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.741973457Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.742014475Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.742039862Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.745404983Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:35:42 pause-318241 crio[2100]: time="2025-12-13T11:35:42.745447035Z" level=info msg="Updated default CNI network name to kindnet"
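
	The CREATE/WRITE/RENAME burst above is CRI-O's CNI monitor reacting to kindnet's atomic config update: kindnet writes 10-kindnet.conflist.temp and then renames it over the live conflist. A sketch of watching the same directory with the third-party github.com/fsnotify/fsnotify package:

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()

		// CRI-O watches this directory to pick up the default CNI network.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// The .temp write followed by a rename is why CREATE, WRITE,
				// and RENAME events all appear in the log above.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err := <-w.Errors:
				log.Println("watch error:", err)
			}
		}
	}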
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	100faf4b82704       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   23 seconds ago       Running             kube-proxy                1                   4bc3bf70ee1ab       kube-proxy-89wjk                       kube-system
	162230a9828f1       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   23 seconds ago       Running             kube-controller-manager   1                   d36a48f2bb5b5       kube-controller-manager-pause-318241   kube-system
	e3b0a51b8f971       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   23 seconds ago       Running             etcd                      1                   b801179981ad6       etcd-pause-318241                      kube-system
	092092a5e6fbf       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   23 seconds ago       Running             kube-apiserver            1                   7aea9716b33be       kube-apiserver-pause-318241            kube-system
	431e449a50a15       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   23 seconds ago       Running             kube-scheduler            1                   ae12b00f0c7d0       kube-scheduler-pause-318241            kube-system
	2c5fd5eef06a9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   ebcf9d1d4a0be       coredns-66bc5c9577-zg2b2               kube-system
	d09d707c6449d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   58f0d6c82b89e       kindnet-cn6qx                          kube-system
	c9267d0dc5c98       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   ebcf9d1d4a0be       coredns-66bc5c9577-zg2b2               kube-system
	5e89137797c50       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   4bc3bf70ee1ab       kube-proxy-89wjk                       kube-system
	9c006bdb41bb7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   58f0d6c82b89e       kindnet-cn6qx                          kube-system
	eecbb13a8e76e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   b801179981ad6       etcd-pause-318241                      kube-system
	3c425b435a3a8       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   ae12b00f0c7d0       kube-scheduler-pause-318241            kube-system
	48ff4e8f7de76       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   d36a48f2bb5b5       kube-controller-manager-pause-318241   kube-system
	9297d505fbac5       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   7aea9716b33be       kube-apiserver-pause-318241            kube-system
	
	
	==> coredns [2c5fd5eef06a9d7b5663c2e3b869895d71b8131c79925bb9982948c1a5f19c3c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44284 - 62561 "HINFO IN 6452898302968458127.7567510728403302696. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034839298s
	
	
	==> coredns [c9267d0dc5c9802552bb682305aef31054c52ed0d89ffe2c3f55b46be31c0a61] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38091 - 34453 "HINFO IN 1830135088201327870.2700890322571675326. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024041183s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-318241
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-318241
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=pause-318241
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_34_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:34:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-318241
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:35:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:34:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:34:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:34:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:35:46 +0000   Sat, 13 Dec 2025 11:35:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-318241
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                5c979713-b6da-4228-ac0e-da304970a9da
	  Boot ID:                    ff73813c-a05d-46ba-ba43-f4a4c3dc42b1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-zg2b2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-318241                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-cn6qx                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-318241             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-318241    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-89wjk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-318241             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-318241 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 91s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-318241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-318241 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 91s                kubelet          Starting kubelet.
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-318241 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-318241 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-318241 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-318241 event: Registered Node pause-318241 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-318241 status is now: NodeReady
	  Normal   RegisteredNode           17s                node-controller  Node pause-318241 event: Registered Node pause-318241 in Controller
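For reference, the Conditions and Allocatable fields in the describe output above can be read programmatically. Below is a minimal client-go sketch, assuming a kubeconfig at the default path and using the node name pause-318241 from this run; the program itself is illustrative and not part of the test harness.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the describe output above.
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-318241", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Mirrors the Conditions table: Type, Status, Reason.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	// Mirrors the Allocatable block: resource name and quantity.
	for name, qty := range node.Status.Allocatable {
		fmt.Printf("%-18s %s\n", name, qty.String())
	}
}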
	
	
	==> dmesg <==
	[Dec13 10:59] overlayfs: idmapped layers are currently not supported
	[Dec13 11:00] overlayfs: idmapped layers are currently not supported
	[Dec13 11:01] overlayfs: idmapped layers are currently not supported
	[  +3.910612] overlayfs: idmapped layers are currently not supported
	[Dec13 11:02] overlayfs: idmapped layers are currently not supported
	[Dec13 11:03] overlayfs: idmapped layers are currently not supported
	[Dec13 11:04] overlayfs: idmapped layers are currently not supported
	[Dec13 11:09] overlayfs: idmapped layers are currently not supported
	[ +31.625971] overlayfs: idmapped layers are currently not supported
	[Dec13 11:10] overlayfs: idmapped layers are currently not supported
	[Dec13 11:12] overlayfs: idmapped layers are currently not supported
	[Dec13 11:13] overlayfs: idmapped layers are currently not supported
	[Dec13 11:14] overlayfs: idmapped layers are currently not supported
	[Dec13 11:15] overlayfs: idmapped layers are currently not supported
	[  +7.705175] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[ +26.259109] overlayfs: idmapped layers are currently not supported
	[Dec13 11:17] overlayfs: idmapped layers are currently not supported
	[ +22.550073] overlayfs: idmapped layers are currently not supported
	[Dec13 11:18] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 11:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3b0a51b8f9711ceb369990f51edf8acc6d7eabc0f3b773df226f5baac8bd05a] <==
	{"level":"warn","ts":"2025-12-13T11:35:34.559966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.585907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.607237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.631782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.646001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.662206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.686508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.700747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.747379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.767215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.788913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.811222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.826791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.856683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.877962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.893789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.921687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.947170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.965367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:34.998383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.045835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.054923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.079588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.100500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:35:35.193985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	
	
	==> etcd [eecbb13a8e76e6a23f875edf9d960195425e9af81b62178678927a293d4850bc] <==
	{"level":"warn","ts":"2025-12-13T11:34:29.192220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.211272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.231143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.252782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.269827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.284307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:34:29.353757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35818","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T11:35:23.630198Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T11:35:23.630278Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-318241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-13T11:35:23.630509Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:35:23.783320Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:35:23.783428Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:35:23.783453Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-13T11:35:23.783540Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-13T11:35:23.783557Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783606Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783674Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:35:23.783727Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783805Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:35:23.783828Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:35:23.783836Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:35:23.786818Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-13T11:35:23.786904Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:35:23.786934Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:35:23.786949Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-318241","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 11:35:56 up  6:18,  0 user,  load average: 1.73, 1.37, 1.59
	Linux pause-318241 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c006bdb41bb7a06898f7b334f571f0ac179e8b67a52a16eb6af04c6f6fa60c3] <==
	I1213 11:34:38.712506       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:34:38.713581       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:34:38.713728       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:34:38.713740       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:34:38.713752       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:34:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:34:38.913166       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:34:38.913185       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:34:38.913193       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:34:38.913517       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:35:08.912768       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:35:08.913731       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 11:35:08.913748       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:35:08.913836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 11:35:10.313344       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:35:10.313448       1 metrics.go:72] Registering metrics
	I1213 11:35:10.313523       1 controller.go:711] "Syncing nftables rules"
	I1213 11:35:18.918102       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:35:18.918156       1 main.go:301] handling current node
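The kindnet lines above show the standard client-go informer lifecycle: list/watch calls fail with i/o timeouts while the apiserver is unreachable, the reflector retries on its own, and "Caches are synced" appears once it recovers. A minimal sketch of that lifecycle, assuming a kubeconfig at the default path; the single pod informer here is illustrative, since kindnet watches several resource types.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One shared factory; the reflector behind each informer retries failed
	// list/watch calls itself, which is what the "Failed to watch" errors show.
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	pods := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	fmt.Println("waiting for caches to sync")
	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
		fmt.Println("sync aborted")
		return
	}
	fmt.Println("caches are synced")
}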
	
	
	==> kindnet [d09d707c6449d5c8655c76e36cdbd8a6b6047eb6ae5c89fc89a58a57e3ee51fe] <==
	I1213 11:35:32.525293       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:35:32.535355       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:35:32.536636       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:35:32.536702       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:35:32.536741       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:35:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:35:32.730711       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:35:32.730817       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:35:32.730853       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:35:32.731214       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:35:36.631164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:35:36.631270       1 metrics.go:72] Registering metrics
	I1213 11:35:36.631375       1 controller.go:711] "Syncing nftables rules"
	I1213 11:35:42.726584       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:35:42.726742       1 main.go:301] handling current node
	I1213 11:35:52.727064       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:35:52.727098       1 main.go:301] handling current node
	
	
	==> kube-apiserver [092092a5e6fbfc1509140795d2624c91baaa2f8e8aca4835a3c725f7a0a68236] <==
	I1213 11:35:36.487363       1 aggregator.go:171] initial CRD sync complete...
	I1213 11:35:36.487392       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 11:35:36.487399       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:35:36.487406       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:35:36.489300       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:35:36.489517       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:35:36.489549       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:35:36.489566       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:35:36.503015       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:35:36.503782       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 11:35:36.514536       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 11:35:36.514723       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:35:36.515347       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 11:35:36.522457       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:35:36.587356       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:35:36.587865       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1213 11:35:36.602347       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:35:36.634275       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 11:35:36.634337       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:35:37.091984       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:35:37.815634       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:35:39.169464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:35:39.467737       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:35:39.517304       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 11:35:39.569158       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8] <==
	W1213 11:35:23.657872       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.657961       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658017       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658071       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658121       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658169       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658218       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658268       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658317       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658367       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658419       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658471       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658519       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658572       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658623       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658673       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658724       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658773       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658825       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658908       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.658962       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.659012       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.659148       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:35:23.659220       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
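Each grpc channel above is the apiserver's storage client redialing etcd at 127.0.0.1:2379 after etcd exited; "connection refused" simply means nothing is listening yet. A minimal clientv3 sketch of the same reachability check, assuming the endpoint from the log; the import path matches etcd v3.5-era clients.

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 2 * time.Second,
	})
	if err != nil {
		fmt.Println("client setup failed:", err)
		return
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// While etcd is down this fails the same way the apiserver's channels do:
	// "dial tcp 127.0.0.1:2379: connect: connection refused".
	if _, err := cli.Status(ctx, "127.0.0.1:2379"); err != nil {
		fmt.Println("etcd status check failed:", err)
		return
	}
	fmt.Println("etcd is serving")
}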
	
	
	==> kube-controller-manager [162230a9828f17339fad434e48c14a2c32bfc3edd1fdd6bb4e94598a1483bdb1] <==
	I1213 11:35:39.168956       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:35:39.170347       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:35:39.171540       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:35:39.174027       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:35:39.176124       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:35:39.177305       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 11:35:39.181724       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:35:39.181746       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:35:39.181752       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:35:39.183697       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 11:35:39.185883       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 11:35:39.187284       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 11:35:39.196153       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 11:35:39.196266       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 11:35:39.196347       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-318241"
	I1213 11:35:39.196394       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 11:35:39.196581       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:35:39.199084       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 11:35:39.200662       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:35:39.211341       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 11:35:39.211428       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:35:39.211848       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:35:39.213000       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 11:35:39.214194       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 11:35:39.216372       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	
	
	==> kube-controller-manager [48ff4e8f7de76df1c150e6930beee122666c25feafc8e396c52b55a20cfc961c] <==
	I1213 11:34:37.154618       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-318241"
	I1213 11:34:37.154678       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 11:34:37.155207       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:34:37.155422       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 11:34:37.155812       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 11:34:37.162059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:34:37.168072       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-318241" podCIDRs=["10.244.0.0/24"]
	I1213 11:34:37.168147       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:34:37.168276       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:34:37.177598       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:34:37.182162       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 11:34:37.185886       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:34:37.200256       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 11:34:37.201266       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 11:34:37.205342       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 11:34:37.205850       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 11:34:37.205934       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 11:34:37.206139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:34:37.206184       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:34:37.206213       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:34:37.206320       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:34:37.206454       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:34:37.206538       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:34:37.206793       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 11:35:22.160283       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [100faf4b827044c1b3d999090e8900ba9801e797318aecfb111a8a37c5461121] <==
	I1213 11:35:36.763084       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:35:37.103749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:35:37.208396       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:35:37.208448       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:35:37.208539       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:35:37.609682       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:35:37.609821       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:35:37.628017       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:35:37.628390       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:35:37.628455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:35:37.642733       1 config.go:200] "Starting service config controller"
	I1213 11:35:37.642765       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:35:37.642814       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:35:37.642820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:35:37.642834       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:35:37.642844       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:35:37.644148       1 config.go:309] "Starting node config controller"
	I1213 11:35:37.644170       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:35:37.644178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:35:37.743397       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:35:37.743512       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:35:37.743586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230] <==
	I1213 11:34:38.678453       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:34:38.875057       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:34:38.975785       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:34:38.975818       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:34:38.975885       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:34:39.043797       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:34:39.043862       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:34:39.050766       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:34:39.051056       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:34:39.051076       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:34:39.053248       1 config.go:200] "Starting service config controller"
	I1213 11:34:39.053266       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:34:39.053279       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:34:39.053284       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:34:39.053299       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:34:39.053303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:34:39.054089       1 config.go:309] "Starting node config controller"
	I1213 11:34:39.054158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:34:39.054186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:34:39.153836       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:34:39.153845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:34:39.153863       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3c425b435a3a8e11b491b65d52eeed23fc1ae1c63629808d7746d91310d263e1] <==
	E1213 11:34:30.196004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:34:30.196100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 11:34:30.196168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:34:31.001786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 11:34:31.022659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 11:34:31.078766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 11:34:31.140005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1213 11:34:31.176534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 11:34:31.194410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:34:31.195298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 11:34:31.197640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 11:34:31.221261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 11:34:31.333952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 11:34:31.377356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 11:34:31.390747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 11:34:31.424317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 11:34:31.448653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:34:31.482442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 11:34:33.180188       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:23.617366       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 11:35:23.617529       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 11:35:23.617726       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 11:35:23.617760       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:23.623729       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 11:35:23.623769       1 run.go:72] "command failed" err="finished without leader elect"
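The "forbidden" errors above are typical of a scheduler starting against an apiserver whose RBAC machinery has not finished syncing; in this log they stop by 11:34:33, when the client-ca caches report synced. A minimal sketch of checking such a permission with a SelfSubjectAccessReview follows, assuming a default kubeconfig; note it tests the kubeconfig's own identity, not system:kube-scheduler itself.

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One of the permissions the scheduler log reports as missing at startup.
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("can list pods:", resp.Status.Allowed)
}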
	
	
	==> kube-scheduler [431e449a50a15c97ddcd5984e57f5efe1f17a676daffe8932f907551ab539972] <==
	I1213 11:35:36.646831       1 serving.go:386] Generated self-signed cert in-memory
	I1213 11:35:38.161886       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:35:38.162297       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:35:38.170451       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:35:38.170626       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 11:35:38.170689       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 11:35:38.170748       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:35:38.172358       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:38.173595       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:38.173702       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:35:38.173736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:35:38.271689       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 11:35:38.273766       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:35:38.274595       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.300792    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55a327512e1f31b42d1108b407175237" pod="kube-system/kube-scheduler-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.301147    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25cd5dd8fe5b10afb6cc108fece163fd" pod="kube-system/etcd-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.301380    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.301808    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cn6qx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ffe31da-1d55-434e-9821-30f3967fa9b5" pod="kube-system/kindnet-cn6qx"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: I1213 11:35:32.314884    1339 scope.go:117] "RemoveContainer" containerID="9297d505fbac54a9bf63d26ce91114ed0f7da1e1f2f1cbb3bbd9de20d75ecec8"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.315547    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55a327512e1f31b42d1108b407175237" pod="kube-system/kube-scheduler-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.315814    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25cd5dd8fe5b10afb6cc108fece163fd" pod="kube-system/etcd-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.315982    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.316135    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cn6qx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ffe31da-1d55-434e-9821-30f3967fa9b5" pod="kube-system/kindnet-cn6qx"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.316349    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-zg2b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="10caf02b-e875-43a4-889f-bf6c434d73dd" pod="kube-system/coredns-66bc5c9577-zg2b2"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.316498    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c9721d581b19638a792d8b55bb352970" pod="kube-system/kube-apiserver-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: I1213 11:35:32.335075    1339 scope.go:117] "RemoveContainer" containerID="5e89137797c50fc5478141518a43dca331d9631eb6f03d208f10aa436870f230"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.335806    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336171    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-cn6qx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ffe31da-1d55-434e-9821-30f3967fa9b5" pod="kube-system/kindnet-cn6qx"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336382    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89wjk\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6feb766a-9bbb-4051-9713-7b7104e86f7b" pod="kube-system/kube-proxy-89wjk"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336725    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-zg2b2\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="10caf02b-e875-43a4-889f-bf6c434d73dd" pod="kube-system/coredns-66bc5c9577-zg2b2"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.336932    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c9721d581b19638a792d8b55bb352970" pod="kube-system/kube-apiserver-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.337265    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="55a327512e1f31b42d1108b407175237" pod="kube-system/kube-scheduler-pause-318241"
	Dec 13 11:35:32 pause-318241 kubelet[1339]: E1213 11:35:32.337446    1339 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-318241\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="25cd5dd8fe5b10afb6cc108fece163fd" pod="kube-system/etcd-pause-318241"
	Dec 13 11:35:36 pause-318241 kubelet[1339]: E1213 11:35:36.429898    1339 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-318241\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-318241' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 13 11:35:36 pause-318241 kubelet[1339]: E1213 11:35:36.430688    1339 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-318241\" is forbidden: User \"system:node:pause-318241\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-318241' and this object" podUID="bd0fdbee19a0f8584b5bffca5b0b933e" pod="kube-system/kube-controller-manager-pause-318241"
	Dec 13 11:35:43 pause-318241 kubelet[1339]: W1213 11:35:43.341481    1339 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 11:35:50 pause-318241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:35:51 pause-318241 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:35:51 pause-318241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
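
Note: the repeated "connect: connection refused" dials to 192.168.85.2:8443 in the kubelet log above are consistent with the control plane having been paused: the kubelet's status manager keeps retrying the API server, which is no longer accepting connections on that port. A minimal Go sketch of the same probe (the endpoint is taken from the log; the program is illustrative, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 192.168.85.2:8443 is the apiserver endpoint from the kubelet log above.
        conn, err := net.DialTimeout("tcp", "192.168.85.2:8443", 2*time.Second)
        if err != nil {
            // e.g. "dial tcp 192.168.85.2:8443: connect: connection refused"
            fmt.Println(err)
            return
        }
        conn.Close()
        fmt.Println("apiserver is accepting connections")
    }
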
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-318241 -n pause-318241
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-318241 -n pause-318241: exit status 2 (353.08912ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-318241 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.96s)

TestNetworkPlugins/group/flannel/Start (7200.078s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-509500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1213 12:10:25.726327  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:10:26.896994  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/auto-509500/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (34m49s)
		TestNetworkPlugins/group/enable-default-cni (39s)
		TestNetworkPlugins/group/enable-default-cni/Start (39s)
		TestNetworkPlugins/group/flannel (35s)
		TestNetworkPlugins/group/flannel/Start (35s)
		TestStartStop (36m56s)
		TestStartStop/group (39s)
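
Note: this panic is the Go test binary's own deadline, not a crash inside minikube. When the -timeout alarm fires, testing.(*M).startAlarm panics with "test timed out after ...", the "running tests" list above names the tests still in flight, and the runtime then dumps every goroutine, which is the trace that follows. A minimal, self-contained reproduction (hypothetical test, not from this suite):

    // slow_test.go -- run with: go test -timeout 1s
    package slow

    import (
        "testing"
        "time"
    )

    // The alarm goroutine panics after 1s and prints all goroutine
    // stacks, in exactly the format of the dump below.
    func TestOutlivesTheDeadline(t *testing.T) {
        time.Sleep(10 * time.Second)
    }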

goroutine 6417 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40002e0a80, 0x40007abbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x4000728030, {0x534c680, 0x2c, 0x2c}, {0x40007abd08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40007135e0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40007135e0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 3604 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0x4000070ff0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40014a96c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40014a96c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40014a96c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40014a96c0, 0x4000476c80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3515
	/usr/local/go/src/testing/testing.go:1997 +0x364
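
Note: goroutine 3604 had been parked inside t.Parallel() (testing.(*testState).waitParallel) for 34 minutes when the alarm fired: a parallel subtest only runs once a slot under -test.parallel frees up, so a few long-running Start subtests can starve the rest of the group until the overall -timeout expires. A minimal sketch of the same blocking behavior (hypothetical test, not from this suite):

    // paralleldemo_test.go -- run with: go test -parallel 2
    package paralleldemo

    import (
        "testing"
        "time"
    )

    // With two slots and three parallel subtests, the third blocks in
    // waitParallel, just like goroutine 3604 above.
    func TestGroup(t *testing.T) {
        for _, name := range []string{"auto", "flannel", "enable-default-cni"} {
            t.Run(name, func(t *testing.T) {
                t.Parallel()
                time.Sleep(5 * time.Second) // stand-in for a long cluster start
            })
        }
    }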

goroutine 5284 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4001413e00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5260
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 861 [chan receive, 112 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001776360, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 831
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0
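
Note: the goroutines parked in transport.(*dynamicClientCert).run (this one and goroutines 166, 4049, 5285, and others below) are client-go's client-certificate reload loops, started by the TLS transport cache when a client is built from a kubeconfig whose cert/key pair lives on disk. If a profile directory is deleted mid-run, that loop emits the "Loading client cert failed ... client.crt: no such file or directory" lines seen shortly before the panic. A hedged sketch of how such a client comes into being (hypothetical kubeconfig path):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical path; the report's profiles keep their client
        // certs under .minikube/profiles/<name>/client.crt.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Building the clientset goes through the TLS transport cache,
        // which spawns the dynamicClientCert.run loop seen above for
        // file-based client certificates.
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            fmt.Println(err)
        }
    }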

goroutine 4048 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400011cf00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4044
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5290 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5289
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5614 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5613
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6394 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x13, 0x40014fec38, 0x4, 0x4002848090, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40014fed98?, 0x1929a0?, 0xfffff656e1a1?, 0x0?, 0x4001fcc0c0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4000736b00)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40014fed68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001f68000)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001f68000)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x400166a8c0, 0x4001f68000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x400166a8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x400166a8c0, 0x4001878150)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3606
	/usr/local/go/src/testing/testing.go:1997 +0x364
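
Note: goroutine 6394 is the flannel Start subtest itself, blocked in os/exec.(*Cmd).Wait (via waitid on a pidfd) on the still-running `minikube start` child; goroutine 6396 below is the output copier that Cmd.Start spawned for the same command. A minimal sketch of the pattern (a hypothetical long-lived command stands in for the minikube invocation):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Stand-in for: out/minikube-linux-arm64 start -p flannel-509500 ...
        cmd := exec.Command("sleep", "3600")
        var out bytes.Buffer
        cmd.Stdout = &out // Start() spawns copier goroutines like 6396
        cmd.Stderr = &out
        if err := cmd.Start(); err != nil {
            fmt.Println("start:", err)
            return
        }
        // Parks here until the child exits; if it never does, only the
        // test binary's -timeout alarm ends the wait.
        fmt.Println("wait:", cmd.Wait())
    }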

goroutine 6396 [IO wait]:
internal/poll.runtime_pollWait(0xffff548fb000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4002840240?, 0x400161414f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4002840240, {0x400161414f, 0xbeb1, 0xbeb1})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6260, {0x400161414f?, 0x40012fbd68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001878420, {0x369c8e8, 0x40002ce218})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x4001878420}, {0x369c8e8, 0x40002ce218}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6260?, {0x369cae0, 0x4001878420})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6260, {0x369cae0, 0x4001878420})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x4001878420}, {0x369c968, 0x40000a6260}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40013ffdc0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6394
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 5285 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f09020, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5260
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 186 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40012f5740, 0x40007baf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x86?, 0x40012f5740, 0x40012f5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x40012f5750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x4000690300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1320 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0xffff548fba00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400154d980?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400154d980)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400154d980)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4004ed1e40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4004ed1e40)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001891d00, {0x36d4000, 0x4004ed1e40})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001891d00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1318
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 166 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f08a20, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4032 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40000a5740, 0x40013c2f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x8?, 0x40000a5740, 0x40000a5788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x400067c870?, 0x400144fb90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000690780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4049
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1935 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x400133a180, 0x4004f1f5e0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1934
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4049 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4002840060, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4044
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 187 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 186
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3786 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4002840c00, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3781
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 185 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4004ed0490, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004ed0480)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f08a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400072f6c0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x4004f45f38, {0x369e520, 0x40013668d0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x40013668d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001408750, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 166
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 165 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4000690300?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6257 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6208
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1227 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0x40014f4c60)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1225
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 866 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001ddc310, 0x2c)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001ddc300)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001776360)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014f1880?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40007b8f38, {0x369e520, 0x400078ee10}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x400078ee10?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001471260, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 861
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3765 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3764
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5597 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f74420, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5595
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1923 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4000691500, 0x4004f1ecb0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1922
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1095 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001a63b00, 0x40014f0e70)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1094
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3233 [chan receive, 34 minutes]:
testing.(*T).Run(0x4004e5a540, {0x296d71f?, 0x13a64202bdd4?}, 0x4001324228)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x4004e5a540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x4004e5a540, 0x339baf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1228 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0x40014f4c60)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1225
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 867 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40018fff40, 0x40014fdf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x96?, 0x40018fff40, 0x40018fff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x40018fff50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x1?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 861
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5918 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5917
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 695 [IO wait, 114 minutes]:
internal/poll.runtime_pollWait(0xffff548fbc00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x400041f280?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x400041f280)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x400041f280)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40008f3280)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40008f3280)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x400027ea00, {0x36d4000, 0x40008f3280})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x400027ea00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 693
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 2019 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4000690600, 0x40016b89a0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1497
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 868 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 867
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3785 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4002774d80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3781
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1111 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001544480, 0x40014f1730)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 822
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4289 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40012ece00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1007 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x400011d680, 0x400072f110)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1006
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3467 [chan receive]:
testing.(*testState).waitParallel(0x4000070ff0)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1906 +0x4c4
testing.tRunner(0x40013e56c0, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3276
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 860 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x1?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 831
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4293 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x400009f740, 0x400009f788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x80?, 0x400009f740, 0x400009f788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4000777a00?, 0x95c64?, 0x4002774000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4290
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6410 [select]:
os/exec.(*Cmd).watchCtx(0x4001f68900, 0x4001de4850)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6407
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5613 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001b64f40, 0x4001b64f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x7c?, 0x4001b64f40, 0x4001b64f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x4001b64f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x400143cc40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5597
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3764 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40007bdf40, 0x40007bdf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xd0?, 0x40007bdf40, 0x40007bdf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x7273752f203e2d2d?, 0x632f65726168732f?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4002775800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3786
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5912 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x40013ff340?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5911
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4294 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4293
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1559 [chan receive, 82 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001351bc0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1557
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3276 [chan receive, 38 minutes]:
testing.(*T).Run(0x4004e5b6c0, {0x296d71f?, 0x4001a3ef58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4004e5b6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4004e5b6c0, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6409 [IO wait]:
internal/poll.runtime_pollWait(0xffff548fb600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4002840c60?, 0x40015c4760?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4002840c60, {0x40015c4760, 0xb8a0, 0xb8a0})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6508, {0x40015c4760?, 0x4001b68d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001878a80, {0x369c8e8, 0x40002ce588})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x4001878a80}, {0x369c8e8, 0x40002ce588}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6508?, {0x369cae0, 0x4001878a80})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6508, {0x369cae0, 0x4001878a80})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x4001878a80}, {0x369c968, 0x40000a6508}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40019de900?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6407
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 6241 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x4000010120?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6237
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3763 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40008f3210, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40008f3200)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4002840c00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014f0c40?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40016256a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40007b7f38, {0x369e520, 0x4004ed6120}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40016257a8?, {0x369e520?, 0x4004ed6120?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001408070, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3786
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4292 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40006b67d0, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006b67c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40018670e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400039cee0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40007b6f38, {0x369e520, 0x40018ba510}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40018ba510?}, 0x10?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016ab600, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4290
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5596 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400143cc40?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5595
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1558 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x40001bc080?}, 0x400159ed80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1557
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1570 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001e7ab90, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001e7ab80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001351bc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000139880?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40015f2f38, {0x369e520, 0x4001fdc7e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4001fdc7e0?}, 0x60?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016ab9a0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1559
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1571 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40012f9740, 0x4001a3ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x31?, 0x40012f9740, 0x40012f9788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x36e6618?, 0x40018819d0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400159ef00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1559
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1572 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1571
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3515 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40014a9180, 0x4001324228)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3233
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6408 [IO wait]:
internal/poll.runtime_pollWait(0xffff54485200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4002840a80?, 0x400145eada?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4002840a80, {0x400145eada, 0x526, 0x526})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a64e8, {0x400145eada?, 0x40012f7568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001878a50, {0x369c8e8, 0x40002ce578})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x4001878a50}, {0x369c8e8, 0x40002ce578}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a64e8?, {0x369cae0, 0x4001878a50})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a64e8, {0x369cae0, 0x4001878a50})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x4001878a50}, {0x369c968, 0x40000a64e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400166a000?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6407
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 3605 [chan receive]:
testing.(*T).Run(0x40014a9880, {0x296d724?, 0x368adf0?}, 0x40018788d0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40014a9880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x40014a9880, 0x4000476d00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3515
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3606 [chan receive]:
testing.(*T).Run(0x40014a9c00, {0x296d724?, 0x368adf0?}, 0x4001878150)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40014a9c00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x40014a9c00, 0x4000476d80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3515
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4065 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4032
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4031 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0x4004ed1150, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004ed1140)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4002840060)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400144eb60?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40000a1ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40000d6f38, {0x369e520, 0x40012e9830}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40000a1fa8?, {0x369e520?, 0x40012e9830?}, 0xb0?, 0x161f90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40018845f0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4049
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5288 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001ddc550, 0xe)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001ddc540)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f09020)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40015488c0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40000a5ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40014faf38, {0x369e520, 0x4001fdcd50}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4001fdcd50?}, 0x0?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400158b2f0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5285
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5289 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40012fb740, 0x40012fb788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x53?, 0x40012fb740, 0x40012fb788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x0?, 0x40012fb750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x40001bc080?, 0x4001413e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5285
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5916 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40017da550, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40017da540)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001866de0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002b1420?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40013c4f38, {0x369e520, 0x4001878510}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4001878510?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40019c3ee0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5913
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5612 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x40008f3e50, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40008f3e40)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f74420)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40017095e0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40016246a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40013c3f38, {0x369e520, 0x4004ed6300}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4004ed6300?}, 0xe0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001a03ee0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5597
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6395 [IO wait]:
internal/poll.runtime_pollWait(0xffff54484800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4002840180?, 0x40006a5ab6?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4002840180, {0x40006a5ab6, 0x54a, 0x54a})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6210, {0x40006a5ab6?, 0x4001b62568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40018783f0, {0x369c8e8, 0x40002ce1d8})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x40018783f0}, {0x369c8e8, 0x40002ce1d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6210?, {0x369cae0, 0x40018783f0})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6210, {0x369cae0, 0x40018783f0})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x40018783f0}, {0x369c968, 0x40000a6210}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400166a8c0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6394
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 6407 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x40013c7c38, 0x4, 0x4002848990, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40013c7d98?, 0x1929a0?, 0xfffff656e1a1?, 0x0?, 0x4001fcc480?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4000736d00)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x40013c7d68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001f68900)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001f68900)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x400166a000, 0x4001f68900)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x400166a000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x400166a000, 0x40018788d0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3605
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6207 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001e7bbd0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001e7bbc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001f9c2a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014f0150?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40000823f0?}, 0x40012f4ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40000823f0}, 0x40000d2f38, {0x369e520, 0x4001e29020}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4001e29020?}, 0x1?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001e1f530, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6242
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5913 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001866de0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5911
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6208 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x40016bf740, 0x40016bf788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0xe8?, 0x40016bf740, 0x40016bf788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x4000010050?, 0xc?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001f68480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6242
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4290 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40018670e0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6397 [select]:
os/exec.(*Cmd).watchCtx(0x4001f68000, 0x4001de41c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6394
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5917 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40000823f0}, 0x4001b67740, 0x4001b67788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40000823f0}, 0x0?, 0x4001b67740, 0x4001b67788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40000823f0?}, 0x36e6618?, 0x40016b8700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40016b8620?, 0x0?, 0x400011c780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5913
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6242 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001f9c2a0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6237
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0
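
The bulk of the parked goroutines above (4031, 5288, 5612, 5916, 6207) share one shape: a client-go cert-rotation worker blocks in workqueue.(*Typed).Get, which waits on a sync.Cond until an item is queued or the queue shuts down. A minimal stdlib-only sketch of that pattern (illustrative, not minikube or client-go source) shows why idle workers surface as sync.Cond.Wait frames in a dump:

package main

import (
	"fmt"
	"sync"
	"time"
)

type queue struct {
	mu       sync.Mutex
	cond     *sync.Cond
	items    []string
	shutdown bool
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

// Get blocks until an item arrives or the queue shuts down, mirroring the
// role of client-go's workqueue.(*Typed).Get in the stacks above.
func (q *queue) Get() (string, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 && !q.shutdown {
		q.cond.Wait() // an idle worker parks here: the sync.Cond.Wait frame
	}
	if q.shutdown {
		return "", false
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, true
}

func (q *queue) Add(item string) {
	q.mu.Lock()
	q.items = append(q.items, item)
	q.mu.Unlock()
	q.cond.Signal()
}

func (q *queue) ShutDown() {
	q.mu.Lock()
	q.shutdown = true
	q.mu.Unlock()
	q.cond.Broadcast()
}

func main() {
	q := newQueue()
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // the "runWorker" goroutine
		defer wg.Done()
		for {
			item, ok := q.Get()
			if !ok {
				return
			}
			fmt.Println("processed", item)
		}
	}()
	q.Add("rotate-cert")
	time.Sleep(50 * time.Millisecond)
	q.ShutDown()
	wg.Wait()
}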

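Goroutines 6395 (io.Copy of a child's output pipe into a bytes.Buffer), 6407 (blocked in a waitid syscall under Cmd.Wait, reached via the integration helper's Run), and 6397 (watchCtx) are the goroutines os/exec routinely creates around a running command: Start spawns one copier goroutine per captured output pipe plus a context watcher, while the caller blocks in Process.Wait. A small sketch under those assumptions (the command and timeout are illustrative, not taken from the test):

package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "echo", "hello")
	var out bytes.Buffer
	cmd.Stdout = &out // non-*os.File writer: exec copies via an io.Copy goroutine
	cmd.Stderr = &out

	// Run = Start (spawns the copier and watchCtx goroutines) + Wait
	// (blocks in waitid until the child exits), matching the stacks above.
	if err := cmd.Run(); err != nil {
		fmt.Println("command failed:", err)
		return
	}
	fmt.Print(out.String())
}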

Test pass (240/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.01
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.43
9 TestDownloadOnly/v1.28.0/DeleteAll 0.38
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.34.2/json-events 3.59
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.11
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.65
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 140.62
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 8.81
57 TestAddons/StoppedEnableDisable 12.41
58 TestCertOptions 41.12
59 TestCertExpiration 248.08
61 TestForceSystemdFlag 39.9
62 TestForceSystemdEnv 38.92
67 TestErrorSpam/setup 31.23
68 TestErrorSpam/start 0.8
69 TestErrorSpam/status 1.11
70 TestErrorSpam/pause 6.47
71 TestErrorSpam/unpause 5.23
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 78.19
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 29.14
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
84 TestFunctional/serial/CacheCmd/cache/add_local 1.29
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 36.66
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.49
95 TestFunctional/serial/LogsFileCmd 1.48
96 TestFunctional/serial/InvalidService 4.42
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 13.46
100 TestFunctional/parallel/DryRun 0.58
101 TestFunctional/parallel/InternationalLanguage 0.33
102 TestFunctional/parallel/StatusCmd 1.47
106 TestFunctional/parallel/ServiceCmdConnect 7.62
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 19.68
110 TestFunctional/parallel/SSHCmd 0.75
111 TestFunctional/parallel/CpCmd 2.21
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.06
118 TestFunctional/parallel/NodeLabels 0.09
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.9
122 TestFunctional/parallel/License 0.35
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/MountCmd/any-port 8.36
139 TestFunctional/parallel/ServiceCmd/List 0.53
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
142 TestFunctional/parallel/ServiceCmd/Format 0.42
143 TestFunctional/parallel/ServiceCmd/URL 0.44
144 TestFunctional/parallel/MountCmd/specific-port 2.16
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
146 TestFunctional/parallel/Version/short 0.09
147 TestFunctional/parallel/Version/components 0.8
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.1
153 TestFunctional/parallel/ImageCommands/Setup 0.65
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.26
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.49
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.08
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.34
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.9
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.95
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.2
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.71
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.18
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.29
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.71
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.4
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.87
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.32
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.67
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.22
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.83
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.56
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.77
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.43
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.19
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 203.53
265 TestMultiControlPlane/serial/DeployApp 6.53
266 TestMultiControlPlane/serial/PingHostFromPods 1.91
267 TestMultiControlPlane/serial/AddWorkerNode 59.22
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
270 TestMultiControlPlane/serial/CopyFile 20.33
271 TestMultiControlPlane/serial/StopSecondaryNode 12.89
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
273 TestMultiControlPlane/serial/RestartSecondaryNode 32.68
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.35
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 138.42
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.14
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
278 TestMultiControlPlane/serial/StopCluster 36.14
279 TestMultiControlPlane/serial/RestartCluster 93.48
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.82
281 TestMultiControlPlane/serial/AddSecondaryNode 63.54
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
287 TestJSONOutput/start/Command 79.45
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.92
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 39.17
313 TestKicCustomNetwork/use_default_bridge_network 35.86
314 TestKicExistingNetwork 34.13
315 TestKicCustomSubnet 38.3
316 TestKicStaticIP 38.42
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.83
321 TestMountStart/serial/StartWithMountFirst 8.87
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 9.11
324 TestMountStart/serial/VerifyMountSecond 0.29
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.32
328 TestMountStart/serial/RestartStopped 7.97
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 139.25
333 TestMultiNode/serial/DeployApp2Nodes 4.85
334 TestMultiNode/serial/PingHostFrom2Pods 0.94
335 TestMultiNode/serial/AddNode 59.47
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.87
339 TestMultiNode/serial/StopNode 2.49
340 TestMultiNode/serial/StartAfterStop 8.18
341 TestMultiNode/serial/RestartKeepsNodes 78.44
342 TestMultiNode/serial/DeleteNode 5.67
343 TestMultiNode/serial/StopMultiNode 24.05
344 TestMultiNode/serial/RestartMultiNode 58.5
345 TestMultiNode/serial/ValidateNameConflict 30.39
350 TestPreload 119.61
352 TestScheduledStopUnix 111.54
355 TestInsufficientStorage 12.87
356 TestRunningBinaryUpgrade 304.02
359 TestMissingContainerUpgrade 127.3
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 44.77
363 TestNoKubernetes/serial/StartWithStopK8s 7.37
364 TestNoKubernetes/serial/Start 8.52
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
367 TestNoKubernetes/serial/ProfileList 6.05
368 TestNoKubernetes/serial/Stop 1.49
369 TestNoKubernetes/serial/StartNoArgs 7.17
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
371 TestStoppedBinaryUpgrade/Setup 0.85
372 TestStoppedBinaryUpgrade/Upgrade 303.78
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.75
382 TestPause/serial/Start 82.56
383 TestPause/serial/SecondStartNoReconfiguration 28.32
TestDownloadOnly/v1.28.0/json-events (10.01s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-660868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-660868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.005892598s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.01s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 10:11:05.572009  907484 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 10:11:05.572085  907484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
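
For reference, the preload-exists check amounts to a stat of the cached tarball path logged above. A hypothetical helper (names and layout assumed; this is not minikube's actual preload.go) would look like:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the preload tarball for the requested
// Kubernetes version is already in the local cache, as in the log above.
func preloadExists(minikubeHome, k8sVersion string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. the .minikube directory (assumption)
	fmt.Println(preloadExists(home, "v1.28.0"))
}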

TestDownloadOnly/v1.28.0/LogsDuration (0.43s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-660868
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-660868: exit status 85 (430.033202ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-660868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-660868 │ jenkins │ v1.37.0 │ 13 Dec 25 10:10 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:10:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:10:55.606512  907490 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:10:55.606720  907490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:10:55.606747  907490 out.go:374] Setting ErrFile to fd 2...
	I1213 10:10:55.606766  907490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:10:55.607058  907490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	W1213 10:10:55.607225  907490 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22128-904040/.minikube/config/config.json: open /home/jenkins/minikube-integration/22128-904040/.minikube/config/config.json: no such file or directory
	I1213 10:10:55.607686  907490 out.go:368] Setting JSON to true
	I1213 10:10:55.608603  907490 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17605,"bootTime":1765603051,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:10:55.608699  907490 start.go:143] virtualization:  
	I1213 10:10:55.613932  907490 out.go:99] [download-only-660868] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1213 10:10:55.614142  907490 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 10:10:55.614266  907490 notify.go:221] Checking for updates...
	I1213 10:10:55.618354  907490 out.go:171] MINIKUBE_LOCATION=22128
	I1213 10:10:55.621690  907490 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:10:55.625012  907490 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:10:55.628136  907490 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:10:55.631349  907490 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:10:55.637484  907490 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:10:55.637909  907490 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:10:55.660700  907490 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:10:55.660814  907490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:10:55.726013  907490 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 10:10:55.716107193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:10:55.726135  907490 docker.go:319] overlay module found
	I1213 10:10:55.729284  907490 out.go:99] Using the docker driver based on user configuration
	I1213 10:10:55.729326  907490 start.go:309] selected driver: docker
	I1213 10:10:55.729342  907490 start.go:927] validating driver "docker" against <nil>
	I1213 10:10:55.729447  907490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:10:55.783495  907490 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 10:10:55.774439554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:10:55.783652  907490 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:10:55.783944  907490 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:10:55.784114  907490 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:10:55.787416  907490 out.go:171] Using Docker driver with root privileges
	I1213 10:10:55.790572  907490 cni.go:84] Creating CNI manager for ""
	I1213 10:10:55.790639  907490 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:10:55.790652  907490 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:10:55.790745  907490 start.go:353] cluster config:
	{Name:download-only-660868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-660868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:10:55.793837  907490 out.go:99] Starting "download-only-660868" primary control-plane node in "download-only-660868" cluster
	I1213 10:10:55.793856  907490 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:10:55.796703  907490 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:10:55.796735  907490 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 10:10:55.796772  907490 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:10:55.816183  907490 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:10:55.816203  907490 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:10:55.816380  907490 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 10:10:55.816485  907490 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:10:55.852682  907490 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:10:55.852713  907490 cache.go:65] Caching tarball of preloaded images
	I1213 10:10:55.852900  907490 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 10:10:55.856235  907490 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 10:10:55.856268  907490 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1213 10:10:55.938373  907490 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1213 10:10:55.938498  907490 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:10:58.847194  907490 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1213 10:10:58.847663  907490 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/download-only-660868/config.json ...
	I1213 10:10:58.847705  907490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/download-only-660868/config.json: {Name:mk68fe05f1007082aa7c6daa8d5a609a781b9ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:10:58.847904  907490 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 10:10:58.848100  907490 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22128-904040/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-660868 host does not exist
	  To start a cluster, run: "minikube start -p download-only-660868"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.43s)
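
The ?checksum=md5:... suffix in the download URL logged above means the fetched preload tarball is verified against the checksum obtained from the GCS API. A stdlib-only sketch of such a verified download (not minikube's download.go; the destination path is illustrative, while the URL and checksum are taken from the log):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest, hashing the bytes in one pass and
// failing if the MD5 does not match the expected checksum.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Tee the response body into both the file and the hash.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"e092595ade89dbfc477bd4cd6b9c633b",
	)
	fmt.Println(err)
}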

TestDownloadOnly/v1.28.0/DeleteAll (0.38s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.38s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-660868
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.34.2/json-events (3.59s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-083474 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-083474 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.588049015s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.59s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 10:11:10.210510  907484 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 10:11:10.210546  907484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-083474
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-083474: exit status 85 (80.649458ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-660868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-660868 │ jenkins │ v1.37.0 │ 13 Dec 25 10:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-660868                                                                                                                                                   │ download-only-660868 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ -o=json --download-only -p download-only-083474 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-083474 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:11:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:11:06.668307  907688 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:11:06.668640  907688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:06.668668  907688 out.go:374] Setting ErrFile to fd 2...
	I1213 10:11:06.668688  907688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:06.669009  907688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:11:06.669483  907688 out.go:368] Setting JSON to true
	I1213 10:11:06.670413  907688 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17616,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:11:06.670559  907688 start.go:143] virtualization:  
	I1213 10:11:06.715297  907688 out.go:99] [download-only-083474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:11:06.715633  907688 notify.go:221] Checking for updates...
	I1213 10:11:06.745631  907688 out.go:171] MINIKUBE_LOCATION=22128
	I1213 10:11:06.776585  907688 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:11:06.809451  907688 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:11:06.841556  907688 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:11:06.873752  907688 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:11:06.939365  907688 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:11:06.939690  907688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:11:06.962483  907688 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:11:06.962605  907688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:07.020705  907688 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:07.011174627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:07.020818  907688 docker.go:319] overlay module found
	I1213 10:11:07.025204  907688 out.go:99] Using the docker driver based on user configuration
	I1213 10:11:07.025250  907688 start.go:309] selected driver: docker
	I1213 10:11:07.025259  907688 start.go:927] validating driver "docker" against <nil>
	I1213 10:11:07.025378  907688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:07.084579  907688 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:07.074913714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:07.084738  907688 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:11:07.085023  907688 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:11:07.085182  907688 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:11:07.089507  907688 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-083474 host does not exist
	  To start a cluster, run: "minikube start -p download-only-083474"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
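
Note: the exit status 85 above is expected for a download-only profile rather than a regression: --download-only never creates the control-plane host, so a later "minikube logs" has no cluster to read. A minimal sketch of the same sequence (profile name "demo" is hypothetical):

    # download images and binaries only; no node is created
    out/minikube-linux-arm64 start -p demo --download-only --driver=docker --container-runtime=crio
    # exits 85 with "The control-plane node demo host does not exist"
    out/minikube-linux-arm64 logs -p demo; echo "exit: $?"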

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-083474
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.11s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-708534 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-708534 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.113944348s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.11s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 10:11:13.757988  907484 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 10:11:13.758036  907484 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-708534
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-708534: exit status 85 (87.121469ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-660868 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-660868 │ jenkins │ v1.37.0 │ 13 Dec 25 10:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-660868                                                                                                                                                          │ download-only-660868 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ -o=json --download-only -p download-only-083474 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-083474 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ delete  │ -p download-only-083474                                                                                                                                                          │ download-only-083474 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │ 13 Dec 25 10:11 UTC │
	│ start   │ -o=json --download-only -p download-only-708534 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-708534 │ jenkins │ v1.37.0 │ 13 Dec 25 10:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:11:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:11:10.690473  907886 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:11:10.690694  907886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:10.690723  907886 out.go:374] Setting ErrFile to fd 2...
	I1213 10:11:10.690743  907886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:11:10.690993  907886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:11:10.691446  907886 out.go:368] Setting JSON to true
	I1213 10:11:10.692351  907886 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17620,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:11:10.692452  907886 start.go:143] virtualization:  
	I1213 10:11:10.695820  907886 out.go:99] [download-only-708534] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:11:10.696146  907886 notify.go:221] Checking for updates...
	I1213 10:11:10.698985  907886 out.go:171] MINIKUBE_LOCATION=22128
	I1213 10:11:10.702052  907886 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:11:10.705013  907886 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:11:10.707984  907886 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:11:10.710843  907886 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:11:10.716376  907886 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:11:10.716633  907886 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:11:10.738990  907886 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:11:10.739100  907886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:10.803237  907886 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:10.793613379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:10.803346  907886 docker.go:319] overlay module found
	I1213 10:11:10.806296  907886 out.go:99] Using the docker driver based on user configuration
	I1213 10:11:10.806336  907886 start.go:309] selected driver: docker
	I1213 10:11:10.806343  907886 start.go:927] validating driver "docker" against <nil>
	I1213 10:11:10.806460  907886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:11:10.862275  907886 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:11:10.85311733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:11:10.862442  907886 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:11:10.862708  907886 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:11:10.862870  907886 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:11:10.865930  907886 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-708534 host does not exist
	  To start a cluster, run: "minikube start -p download-only-708534"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-708534
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1213 10:11:15.085085  907484 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-558778 --alsologtostderr --binary-mirror http://127.0.0.1:34767 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-558778" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-558778
--- PASS: TestBinaryMirror (0.65s)
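
TestBinaryMirror checks that --binary-mirror is honored, i.e. that the Kubernetes binaries are fetched from the given URL instead of dl.k8s.io; the run above points at a local listener on 127.0.0.1:34767. A rough hand-run equivalent, assuming a directory of pre-downloaded binaries (port and directory hypothetical):

    # serve pre-downloaded release binaries over plain HTTP
    python3 -m http.server 34767 --directory ./mirror &
    # fetch through the local mirror instead of dl.k8s.io
    out/minikube-linux-arm64 start --download-only -p demo --binary-mirror http://127.0.0.1:34767 --driver=docker --container-runtime=crio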

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-054604
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-054604: exit status 85 (73.896705ms)

-- stdout --
	* Profile "addons-054604" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-054604"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-054604
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-054604: exit status 85 (74.469758ms)

-- stdout --
	* Profile "addons-054604" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-054604"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
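
Both PreSetup checks exercise the same guard from opposite directions: addon commands against a profile that does not exist must fail fast with exit status 85 and the "Profile ... not found" hint, without creating anything as a side effect. Condensed (profile name hypothetical):

    # both commands exit 85 and leave no profile behind
    out/minikube-linux-arm64 addons enable dashboard -p no-such-profile
    out/minikube-linux-arm64 addons disable dashboard -p no-such-profile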

TestAddons/Setup (140.62s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-054604 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-054604 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m20.620284026s)
--- PASS: TestAddons/Setup (140.62s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-054604 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-054604 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (8.81s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-054604 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-054604 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e0da02d5-4d9f-4b25-ad58-dc8915a3077d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e0da02d5-4d9f-4b25-ad58-dc8915a3077d] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004129357s
addons_test.go:696: (dbg) Run:  kubectl --context addons-054604 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-054604 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-054604 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-054604 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.81s)

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-054604
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-054604: (12.125885297s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-054604
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-054604
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-054604
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestCertOptions (41.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-370679 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-370679 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.309150287s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-370679 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-370679 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-370679 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-370679" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-370679
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-370679: (2.083517283s)
--- PASS: TestCertOptions (41.12s)
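
The substance of TestCertOptions is that every --apiserver-ips and --apiserver-names value must land as a SAN in the generated API server certificate, and the custom --apiserver-port=8555 must appear in the kubeconfig. A narrower manual probe along the same lines (grep pattern illustrative):

    # list the SANs of the apiserver certificate
    out/minikube-linux-arm64 -p cert-options-370679 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"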

TestCertExpiration (248.08s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-242831 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-242831 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.243449697s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-242831 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1213 11:40:25.726116  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-242831 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.330101186s)
helpers_test.go:176: Cleaning up "cert-expiration-242831" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-242831
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-242831: (2.504709851s)
--- PASS: TestCertExpiration (248.08s)
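
TestCertExpiration starts with --cert-expiration=3m, waits out that window (most of the 248s wall time), then restarts with --cert-expiration=8760h, which should regenerate the expired certificates. The remaining validity can be inspected directly (sketch):

    # print the notAfter timestamp of the apiserver certificate
    out/minikube-linux-arm64 -p cert-expiration-242831 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"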

TestForceSystemdFlag (39.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-770062 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-770062 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.739551841s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-770062 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-770062" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-770062
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-770062: (2.69480554s)
--- PASS: TestForceSystemdFlag (39.90s)
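
The "cat /etc/crio/crio.conf.d/02-crio.conf" step is how the test confirms --force-systemd reached CRI-O: the dropped-in config should select the systemd cgroup manager. A narrower probe (sketch; assumes CRI-O's cgroup_manager key):

    # expect: cgroup_manager = "systemd" when --force-systemd was passed
    out/minikube-linux-arm64 -p force-systemd-flag-770062 ssh \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"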

TestForceSystemdEnv (38.92s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-868514 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-868514 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.213640673s)
helpers_test.go:176: Cleaning up "force-systemd-env-868514" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-868514
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-868514: (2.707609445s)
--- PASS: TestForceSystemdEnv (38.92s)

TestErrorSpam/setup (31.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-060432 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-060432 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-060432 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-060432 --driver=docker  --container-runtime=crio: (31.230049439s)
--- PASS: TestErrorSpam/setup (31.23s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (6.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause: exit status 80 (1.996544853s)

-- stdout --
	* Pausing node nospam-060432 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:17:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause: exit status 80 (2.439858201s)

-- stdout --
	* Pausing node nospam-060432 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:17:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause: exit status 80 (2.032047542s)

-- stdout --
	* Pausing node nospam-060432 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:17:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.47s)
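
All three pause attempts fail the same way: minikube shells into the node and runs "sudo runc list -f json", which aborts because runc's state directory /run/runc is missing. TestErrorSpam asserts on the shape of the log output rather than on pause succeeding, which is why the block still passes. The failure can be isolated without minikube's wrapper (sketch):

    # the exact command minikube runs inside the node
    out/minikube-linux-arm64 -p nospam-060432 ssh "sudo runc list -f json"
    # check whether the runc state directory exists at all
    out/minikube-linux-arm64 -p nospam-060432 ssh "ls -ld /run/runc"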

TestErrorSpam/unpause (5.23s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause: exit status 80 (1.643189787s)

-- stdout --
	* Unpausing node nospam-060432 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:17:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause: exit status 80 (2.111071095s)

-- stdout --
	* Unpausing node nospam-060432 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:17:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause: exit status 80 (1.477577344s)

-- stdout --
	* Unpausing node nospam-060432 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:17:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.23s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 stop: (1.312529504s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-060432 --log_dir /tmp/nospam-060432 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-769798 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1213 10:18:37.840113  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:37.850709  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:37.862167  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:37.884010  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:37.925527  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:38.014916  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:38.176486  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:38.498264  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:39.140079  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:40.421500  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:42.984454  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:48.106448  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:18:58.347886  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-769798 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.188486161s)
--- PASS: TestFunctional/serial/StartWithProxy (78.19s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.14s)

=== RUN   TestFunctional/serial/SoftStart
I1213 10:19:04.026725  907484 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-769798 --alsologtostderr -v=8
E1213 10:19:18.829676  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-769798 --alsologtostderr -v=8: (29.136925212s)
functional_test.go:678: soft start took 29.137437472s for "functional-769798" cluster.
I1213 10:19:33.163961  907484 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (29.14s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-769798 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 cache add registry.k8s.io/pause:3.1: (1.233723212s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 cache add registry.k8s.io/pause:3.3: (1.18472547s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 cache add registry.k8s.io/pause:latest: (1.112643175s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-769798 /tmp/TestFunctionalserialCacheCmdcacheadd_local1375632755/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cache add minikube-local-cache-test:functional-769798
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cache delete minikube-local-cache-test:functional-769798
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-769798
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)
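
Note: the add_local sequence above (build an image with the host's docker, cache it into the cluster, then delete both copies) can be scripted directly. A minimal Go sketch, assuming minikube and docker are on PATH; the profile name is taken from this run, and ./image-context is a hypothetical directory containing a Dockerfile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-769798"
	tag := "minikube-local-cache-test:" + profile
	ctxDir := "./image-context" // hypothetical build context containing a Dockerfile

	// Build on the host, load into minikube's cache, then clean up both sides.
	steps := [][]string{
		{"docker", "build", "-t", tag, ctxDir},
		{"minikube", "-p", profile, "cache", "add", tag},
		{"minikube", "-p", profile, "cache", "delete", tag},
		{"docker", "rmi", tag},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %s\n", s, out)
			return
		}
	}
	fmt.Println("local image cached into the cluster and cleaned up")
}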

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.344526ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
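
Note: the cache_reload flow above is: remove the image inside the node, confirm it is gone (crictl inspecti exits non-zero for a missing image), run cache reload, confirm it is back. A minimal Go sketch of the same sequence, assuming minikube on PATH and the profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-769798"
	image := "registry.k8s.io/pause:latest"

	// Delete the image inside the node, then confirm it is gone.
	run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+image)
	if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
		fmt.Println("image unexpectedly still present")
	}

	// cache reload pushes everything in minikube's local cache back into the node.
	if out, err := run("minikube", "-p", profile, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", out)
		return
	}
	if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
		fmt.Println("image still missing after reload")
	} else {
		fmt.Println("cache reload restored", image)
	}
}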

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 kubectl -- --context functional-769798 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-769798 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (36.66s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-769798 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 10:19:59.791012  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-769798 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.662656922s)
functional_test.go:776: restart took 36.662757641s for "functional-769798" cluster.
I1213 10:20:17.481941  907484 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (36.66s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-769798 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
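
Note: ComponentHealth works by listing the tier=control-plane pods in kube-system as JSON and checking each pod's phase plus its Ready condition, which is exactly what the "phase: Running / status: Ready" lines above reflect. A minimal Go sketch of that check, assuming kubectl on PATH and the context from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors only the fields of `kubectl get po -o json` that the
// health check needs: name, phase, and the Ready condition.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// The same query the test runs against the control plane.
	out, err := exec.Command("kubectl", "--context", "functional-769798",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}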

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 logs: (1.494235613s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 logs --file /tmp/TestFunctionalserialLogsFileCmd2128918363/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 logs --file /tmp/TestFunctionalserialLogsFileCmd2128918363/001/logs.txt: (1.476629523s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-769798 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-769798
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-769798: exit status 115 (398.694168ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31951 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-769798 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 config get cpus: exit status 14 (85.470478ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 config get cpus: exit status 14 (69.964418ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
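
Note: config get on an unset key exits with status 14 (seen twice above), so callers must treat that exit code as "key not set" rather than a hard failure. A minimal Go sketch, assuming minikube on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `config get` on an unset key exits with status 14; distinguish
	// "unset" from a genuine failure by inspecting the exit code.
	out, err := exec.Command("minikube", "-p", "functional-769798",
		"config", "get", "cpus").Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		fmt.Println("cpus is not set in the profile config")
	default:
		fmt.Println("config get failed:", err)
	}
}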

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.46s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-769798 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-769798 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 932683: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.46s)

TestFunctional/parallel/DryRun (0.58s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-769798 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-769798 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (213.766012ms)

                                                
                                                
-- stdout --
	* [functional-769798] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:20:55.743972  931975 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:20:55.744200  931975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:55.744233  931975 out.go:374] Setting ErrFile to fd 2...
	I1213 10:20:55.744263  931975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:55.744532  931975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:20:55.744944  931975 out.go:368] Setting JSON to false
	I1213 10:20:55.745929  931975 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18205,"bootTime":1765603051,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:20:55.746038  931975 start.go:143] virtualization:  
	I1213 10:20:55.749135  931975 out.go:179] * [functional-769798] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:20:55.752996  931975 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:20:55.753099  931975 notify.go:221] Checking for updates...
	I1213 10:20:55.758914  931975 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:20:55.761869  931975 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:20:55.764736  931975 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:20:55.767716  931975 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:20:55.770608  931975 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:20:55.774027  931975 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:20:55.774647  931975 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:20:55.798205  931975 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:20:55.798331  931975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:55.874444  931975 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 10:20:55.864117866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:20:55.874554  931975 docker.go:319] overlay module found
	I1213 10:20:55.877655  931975 out.go:179] * Using the docker driver based on existing profile
	I1213 10:20:55.880421  931975 start.go:309] selected driver: docker
	I1213 10:20:55.880456  931975 start.go:927] validating driver "docker" against &{Name:functional-769798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-769798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:20:55.880568  931975 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:20:55.883986  931975 out.go:203] 
	W1213 10:20:55.886779  931975 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 10:20:55.889421  931975 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-769798 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.58s)
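
Note: --dry-run exercises only the validation path; the undersized --memory 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before any cluster state is touched. A minimal Go sketch that surfaces that exit code, assuming minikube on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A dry-run start validates flags without creating anything; the
	// 250MB request is below the 1800MB minimum and should be refused.
	cmd := exec.Command("minikube", "start", "-p", "functional-769798",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("dry-run accepted the configuration")
	case errors.As(err, &exitErr):
		fmt.Println("dry-run rejected the configuration, exit status", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}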

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.33s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-769798 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-769798 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (324.733451ms)

                                                
                                                
-- stdout --
	* [functional-769798] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:20:55.449875  931855 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:20:55.450085  931855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:55.450115  931855 out.go:374] Setting ErrFile to fd 2...
	I1213 10:20:55.450133  931855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:20:55.450572  931855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:20:55.452175  931855 out.go:368] Setting JSON to false
	I1213 10:20:55.455549  931855 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18205,"bootTime":1765603051,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:20:55.455775  931855 start.go:143] virtualization:  
	I1213 10:20:55.459251  931855 out.go:179] * [functional-769798] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 10:20:55.469815  931855 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:20:55.469903  931855 notify.go:221] Checking for updates...
	I1213 10:20:55.478234  931855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:20:55.481973  931855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:20:55.488851  931855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:20:55.491732  931855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:20:55.495814  931855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:20:55.501968  931855 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:20:55.502710  931855 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:20:55.556936  931855 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:20:55.557056  931855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:20:55.658445  931855 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 10:20:55.646253714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:20:55.658553  931855 docker.go:319] overlay module found
	I1213 10:20:55.661960  931855 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 10:20:55.664972  931855 start.go:309] selected driver: docker
	I1213 10:20:55.664995  931855 start.go:927] validating driver "docker" against &{Name:functional-769798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-769798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:20:55.665116  931855 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:20:55.668902  931855 out.go:203] 
	W1213 10:20:55.671825  931855 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 10:20:55.674748  931855 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.33s)

TestFunctional/parallel/StatusCmd (1.47s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.47s)
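
Note: the -f flag to status takes a Go text/template rendered against the status struct; "kublet:" in the test's format string above is literal label text (a typo in the test, reproduced verbatim), while {{.Kubelet}} is the actual field lookup. A minimal sketch of how such a template renders; the Status type here is a trimmed-down assumption, not minikube's real struct:

package main

import (
	"os"
	"text/template"
)

// Status carries just the fields the format string references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The exact format string the test passes via -f.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	t := template.Must(template.New("status").Parse(format))
	_ = t.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}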

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-769798 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-769798 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-klvzx" [a477295f-26b4-41e1-880b-99c1b158e8f2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-klvzx" [a477295f-26b4-41e1-880b-99c1b158e8f2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003203291s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31338
functional_test.go:1680: http://192.168.49.2:31338: success! body:
Request served by hello-node-connect-7d85dfc575-klvzx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31338
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.62s)
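
Note: the connect check above boils down to asking minikube for the service's NodePort URL and issuing a plain HTTP GET against it; the echo server then reports the request it saw. A minimal Go sketch, assuming minikube on PATH and the deployment/service created as above:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL the same way the test does...
	out, err := exec.Command("minikube", "-p", "functional-769798",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// ...then hit the echo server and show what it saw.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}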

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (19.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c5e9c2a4-bcc2-43d5-bbb3-6a80f79f43b1] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003070731s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-769798 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-769798 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-769798 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-769798 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [de5e4c81-5fbf-456c-a6ff-a77e21eddeb2] Pending
helpers_test.go:353: "sp-pod" [de5e4c81-5fbf-456c-a6ff-a77e21eddeb2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.002903272s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-769798 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-769798 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-769798 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6e5b2604-2237-408d-b2b8-6bf0c2a403ef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [6e5b2604-2237-408d-b2b8-6bf0c2a403ef] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.008031249s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-769798 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.68s)
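
Note: the persistence check above writes a marker file through the first pod, deletes and recreates the pod, and verifies the file is still on the volume, which proves the PVC outlives the pod. A minimal Go sketch of the same steps; the wait-for-Running step between apply and exec is elided here (the real test polls for it):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the profile's context and echoes output.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-769798"}, args...)
	out, _ := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Print(string(out))
}

func main() {
	// Write a marker file through the running pod...
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// ...recycle the pod; the PVC (and its data) must survive...
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here until the new pod is Running)
	// ...then confirm the marker is still on the volume.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}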

                                                
                                    
TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.21s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh -n functional-769798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cp functional-769798:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1526269303/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh -n functional-769798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh -n functional-769798 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.21s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/907484/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo cat /etc/test/nested/copy/907484/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/907484.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo cat /etc/ssl/certs/907484.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/907484.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo cat /usr/share/ca-certificates/907484.pem"
2025/12/13 10:21:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/9074842.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo cat /etc/ssl/certs/9074842.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/9074842.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo cat /usr/share/ca-certificates/9074842.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-769798 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
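
Note: the --template argument above is a Go text/template that ranges over the first node's .metadata.labels map and prints each key. A minimal sketch of the same template logic evaluated locally; the label map here is a hypothetical stand-in for the node object:

package main

import (
	"os"
	"text/template"
)

func main() {
	// The same range the test's --template uses, applied directly to a
	// label map instead of the full node object.
	const tpl = "{{range $k, $v := .}}{{$k}} {{end}}"
	labels := map[string]string{ // hypothetical node labels
		"kubernetes.io/arch":     "arm64",
		"kubernetes.io/hostname": "functional-769798",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(tpl))
	_ = t.Execute(os.Stdout, labels)
}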

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.9s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 ssh "sudo systemctl is-active docker": exit status 1 (383.737117ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 ssh "sudo systemctl is-active containerd": exit status 1 (514.090439ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.90s)
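
Note: systemctl is-active exits 0 for an active unit and non-zero otherwise (status 3 above for the inactive docker and containerd units), and minikube ssh forwards that exit status, which is exactly what the test keys on. A minimal Go sketch, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit inside the node is active:
// `systemctl is-active` exits 0 when active, and `minikube ssh`
// forwards that exit status to the caller.
func isActive(profile, unit string) bool {
	return exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", unit, isActive("functional-769798", unit))
	}
}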

                                                
                                    
TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-769798 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-769798 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-769798 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 929528: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-769798 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-769798 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-769798 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [72c4c84b-0a6c-4399-aaf7-138b25fb6a5a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [72c4c84b-0a6c-4399-aaf7-138b25fb6a5a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003380081s
I1213 10:20:35.422155  907484 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-769798 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
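
Note: the jsonpath query above only returns an address once minikube tunnel has programmed the LoadBalancer service, so callers typically poll it. A minimal Go sketch of such a poll, assuming kubectl on PATH and a tunnel already running:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIngressIP polls the same jsonpath query as the test until the
// tunnel assigns the LoadBalancer service an ingress IP.
func waitForIngressIP(context, svc string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "svc", svc,
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no ingress IP for %s within %s", svc, timeout)
}

func main() {
	ip, err := waitForIngressIP("functional-769798", "nginx-svc", time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("tunnel endpoint:", "http://"+ip)
}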

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.166.7 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-769798 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-769798 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-769798 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-hvstr" [8e9668f4-771b-400a-8d27-53d8b2763f3e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-hvstr" [8e9668f4-771b-400a-8d27-53d8b2763f3e] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003686197s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "391.016579ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.79136ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "376.847725ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "62.381339ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (8.36s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdany-port367215312/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765621246742271636" to /tmp/TestFunctionalparallelMountCmdany-port367215312/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765621246742271636" to /tmp/TestFunctionalparallelMountCmdany-port367215312/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765621246742271636" to /tmp/TestFunctionalparallelMountCmdany-port367215312/001/test-1765621246742271636
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.614008ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 10:20:47.074154  907484 retry.go:31] will retry after 618.524834ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 10:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 10:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 10:20 test-1765621246742271636
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh cat /mount-9p/test-1765621246742271636
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-769798 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [818331a9-fb63-47d9-9e56-dd4560308baf] Pending
helpers_test.go:353: "busybox-mount" [818331a9-fb63-47d9-9e56-dd4560308baf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [818331a9-fb63-47d9-9e56-dd4560308baf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [818331a9-fb63-47d9-9e56-dd4560308baf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006445088s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-769798 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdany-port367215312/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.36s)
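
Note the retry.go line above: the first findmnt probe races the mount daemon, fails with exit status 1, and is retried after a backoff. A minimal sketch of that pattern, assuming exponential backoff over five attempts (the real helper computes its own delays); the binary path, profile name, and findmnt command are as logged.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mounted reports whether the 9p mount is visible inside the guest.
func mounted(profile string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "findmnt -T /mount-9p | grep 9p")
	return cmd.Run() == nil // exit status 0 means findmnt found the mount
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if mounted("functional-769798") {
			fmt.Println("9p mount is up")
			return
		}
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, backoff)
		time.Sleep(backoff)
		backoff *= 2 // assumed exponential growth
	}
	fmt.Println("mount never appeared")
}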

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 service list -o json
functional_test.go:1504: Took "505.773461ms" to run "out/minikube-linux-arm64 -p functional-769798 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30345
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30345
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
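
The HTTPS and URL variants resolve the same NodePort endpoint (port 30345 on the cluster IP). A sketch of driving that from Go and probing the result; the HTTP GET is an illustrative extra step, not something these tests perform.

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-769798",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30345
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "answered with", resp.Status)
}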

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdspecific-port979693275/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (492.10053ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 10:20:55.592601  907484 retry.go:31] will retry after 376.065439ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdspecific-port979693275/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 ssh "sudo umount -f /mount-9p": exit status 1 (364.838345ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-769798 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdspecific-port979693275/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1078364513/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1078364513/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1078364513/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T" /mount1: (1.051892409s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-769798 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1078364513/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1078364513/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-769798 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1078364513/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)
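
The cleanup check boils down to: kill every mount process for the profile with a single mount --kill=true, then confirm each mount point is gone. A condensed sketch of that sequence; the mount-point names match this log, and the findmnt probe mirrors the earlier subtests.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk, profile := "out/minikube-linux-arm64", "functional-769798"
	// Kill all running `minikube mount` processes for this profile.
	if err := exec.Command(mk, "mount", "-p", profile, "--kill=true").Run(); err != nil {
		panic(err)
	}
	for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
		// findmnt exits non-zero when the target is not a mount point.
		err := exec.Command(mk, "-p", profile, "ssh", "findmnt -T "+mp).Run()
		fmt.Printf("%s still mounted: %v\n", mp, err == nil)
	}
}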

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (0.8s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-769798 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-769798
localhost/kicbase/echo-server:functional-769798
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-769798 image ls --format short --alsologtostderr:
I1213 10:21:11.973393  934608 out.go:360] Setting OutFile to fd 1 ...
I1213 10:21:11.973598  934608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:11.973629  934608 out.go:374] Setting ErrFile to fd 2...
I1213 10:21:11.973650  934608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:11.973957  934608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:21:11.974619  934608 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:11.974781  934608 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:11.976961  934608 cli_runner.go:164] Run: docker container inspect functional-769798 --format={{.State.Status}}
I1213 10:21:12.008983  934608 ssh_runner.go:195] Run: systemctl --version
I1213 10:21:12.009117  934608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-769798
I1213 10:21:12.052147  934608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-769798/id_rsa Username:docker}
I1213 10:21:12.177993  934608 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
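
The stderr trace shows where the listing comes from: sudo crictl images --output json inside the node, which image ls then renders. A sketch of consuming that JSON directly, assuming crictl's documented output shape of a top-level "images" array; only the fields this report displays are modeled.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models just the fields used here; crictl emits more.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-769798",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}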

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-769798 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-769798  │ ce2d2cda2d858 │ 4.79MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ localhost/minikube-local-cache-test     │ functional-769798  │ 8f4d1b410c02d │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-769798 image ls --format table --alsologtostderr:
I1213 10:21:12.816190  934868 out.go:360] Setting OutFile to fd 1 ...
I1213 10:21:12.816370  934868 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.816380  934868 out.go:374] Setting ErrFile to fd 2...
I1213 10:21:12.816386  934868 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.816636  934868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:21:12.817242  934868 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.817361  934868 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.817889  934868 cli_runner.go:164] Run: docker container inspect functional-769798 --format={{.State.Status}}
I1213 10:21:12.836182  934868 ssh_runner.go:195] Run: systemctl --version
I1213 10:21:12.836239  934868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-769798
I1213 10:21:12.860648  934868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-769798/id_rsa Username:docker}
I1213 10:21:12.968877  934868 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-769798 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8f4d1b410c02db353e58f633e741039422bb1298d0f4174987825a4ee14aa43e","repoDigests":["localhost/minikube-local-cache-test@sha256:f1d3606334087689ee570d7af4398c3c146011d3370d3071e72589d72b98048f"],"repoTags":["localhost/minikube-local-cache-test:functional-769798"],"size":"3330"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-769798"],"size":"4789170"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-769798 image ls --format json --alsologtostderr:
I1213 10:21:12.541427  934802 out.go:360] Setting OutFile to fd 1 ...
I1213 10:21:12.541608  934802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.541623  934802 out.go:374] Setting ErrFile to fd 2...
I1213 10:21:12.541629  934802 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.541893  934802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:21:12.542661  934802 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.542785  934802 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.544097  934802 cli_runner.go:164] Run: docker container inspect functional-769798 --format={{.State.Status}}
I1213 10:21:12.562880  934802 ssh_runner.go:195] Run: systemctl --version
I1213 10:21:12.562953  934802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-769798
I1213 10:21:12.580859  934802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-769798/id_rsa Username:docker}
I1213 10:21:12.700259  934802 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
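
The stdout above is a single JSON array of image records. A sketch that decodes it from stdin and prints tag and size, modeling exactly the four fields visible in the dump; to try it, pipe `out/minikube-linux-arm64 -p functional-769798 image ls --format json` into the program.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the record shape in the stdout above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-55s %s bytes\n", tag, img.Size)
		}
	}
}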

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-769798 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 8f4d1b410c02db353e58f633e741039422bb1298d0f4174987825a4ee14aa43e
repoDigests:
- localhost/minikube-local-cache-test@sha256:f1d3606334087689ee570d7af4398c3c146011d3370d3071e72589d72b98048f
repoTags:
- localhost/minikube-local-cache-test:functional-769798
size: "3330"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-769798
size: "4789170"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-769798 image ls --format yaml --alsologtostderr:
I1213 10:21:12.275356  934707 out.go:360] Setting OutFile to fd 1 ...
I1213 10:21:12.275574  934707 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.275598  934707 out.go:374] Setting ErrFile to fd 2...
I1213 10:21:12.275617  934707 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.275890  934707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:21:12.276510  934707 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.276658  934707 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.277188  934707 cli_runner.go:164] Run: docker container inspect functional-769798 --format={{.State.Status}}
I1213 10:21:12.297415  934707 ssh_runner.go:195] Run: systemctl --version
I1213 10:21:12.297481  934707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-769798
I1213 10:21:12.320784  934707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-769798/id_rsa Username:docker}
I1213 10:21:12.429027  934707 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-769798 ssh pgrep buildkitd: exit status 1 (333.12165ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr: (3.525722539s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 552b3bb4ecb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-769798
--> ca967ea12c8
Successfully tagged localhost/my-image:functional-769798
ca967ea12c8b66fa454bf164f992c32faf880d7e3312a117501e4d6c4f0b1e58
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-769798 image build -t localhost/my-image:functional-769798 testdata/build --alsologtostderr:
I1213 10:21:12.665831  934832 out.go:360] Setting OutFile to fd 1 ...
I1213 10:21:12.666703  934832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.666741  934832 out.go:374] Setting ErrFile to fd 2...
I1213 10:21:12.666763  934832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:21:12.667058  934832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:21:12.668673  934832 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.669942  934832 config.go:182] Loaded profile config "functional-769798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:21:12.671131  934832 cli_runner.go:164] Run: docker container inspect functional-769798 --format={{.State.Status}}
I1213 10:21:12.689851  934832 ssh_runner.go:195] Run: systemctl --version
I1213 10:21:12.689911  934832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-769798
I1213 10:21:12.716167  934832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-769798/id_rsa Username:docker}
I1213 10:21:12.844143  934832 build_images.go:162] Building image from path: /tmp/build.4087128063.tar
I1213 10:21:12.844209  934832 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 10:21:12.851873  934832 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4087128063.tar
I1213 10:21:12.856734  934832 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4087128063.tar: stat -c "%s %y" /var/lib/minikube/build/build.4087128063.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4087128063.tar': No such file or directory
I1213 10:21:12.856780  934832 ssh_runner.go:362] scp /tmp/build.4087128063.tar --> /var/lib/minikube/build/build.4087128063.tar (3072 bytes)
I1213 10:21:12.876085  934832 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4087128063
I1213 10:21:12.895454  934832 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4087128063 -xf /var/lib/minikube/build/build.4087128063.tar
I1213 10:21:12.909771  934832 crio.go:315] Building image: /var/lib/minikube/build/build.4087128063
I1213 10:21:12.909865  934832 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-769798 /var/lib/minikube/build/build.4087128063 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1213 10:21:16.108386  934832 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-769798 /var/lib/minikube/build/build.4087128063 --cgroup-manager=cgroupfs: (3.198492067s)
I1213 10:21:16.108458  934832 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4087128063
I1213 10:21:16.116391  934832 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4087128063.tar
I1213 10:21:16.124554  934832 build_images.go:218] Built localhost/my-image:functional-769798 from /tmp/build.4087128063.tar
I1213 10:21:16.124588  934832 build_images.go:134] succeeded building to: functional-769798
I1213 10:21:16.124599  934832 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.10s)
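
On the crio runtime the build path visible in the stderr is: tar the local build context, copy the tarball into the node, untar it under /var/lib/minikube/build, and run podman build there. A condensed sketch of the same sequence driven through `minikube cp` and `minikube ssh`; the fixed /tmp/build.tar path stands in for the per-build temp names in the log, the node-side staging location is an assumption, and error handling is reduced to panics.

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(name + " failed: " + err.Error())
	}
}

func main() {
	mk, p := "out/minikube-linux-arm64", "functional-769798"
	run("tar", "-cf", "/tmp/build.tar", "-C", "testdata/build", ".")
	run(mk, "-p", p, "cp", "/tmp/build.tar", "/tmp/build.tar") // host -> node
	run(mk, "-p", p, "ssh", "sudo mkdir -p /var/lib/minikube/build && "+
		"sudo tar -C /var/lib/minikube/build -xf /tmp/build.tar")
	run(mk, "-p", p, "ssh", "sudo podman build -t localhost/my-image:functional-769798 "+
		"/var/lib/minikube/build --cgroup-manager=cgroupfs")
}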

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-769798
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image load --daemon kicbase/echo-server:functional-769798 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 image load --daemon kicbase/echo-server:functional-769798 --alsologtostderr: (1.066324568s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image load --daemon kicbase/echo-server:functional-769798 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-769798
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image load --daemon kicbase/echo-server:functional-769798 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-769798 image load --daemon kicbase/echo-server:functional-769798 --alsologtostderr: (2.729156051s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image save kicbase/echo-server:functional-769798 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image rm kicbase/echo-server:functional-769798 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-769798
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 image save --daemon kicbase/echo-server:functional-769798 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-769798
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
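
The save-to-daemon round trip above is: remove the image from the local docker daemon, pull it back out of the cluster with image save --daemon, then confirm the daemon has it again under the localhost/ prefix the crio-backed cluster stores it with. A minimal sketch of the same three steps; image and profile names are from this log.

package main

import "os/exec"

func mustRun(name string, args ...string) {
	if err := exec.Command(name, args...).Run(); err != nil {
		panic(name + " failed: " + err.Error())
	}
}

func main() {
	img := "kicbase/echo-server:functional-769798"
	// Drop the daemon's copy; ignore the error if it was already absent.
	exec.Command("docker", "rmi", img).Run()
	mustRun("out/minikube-linux-arm64", "-p", "functional-769798",
		"image", "save", "--daemon", img)
	mustRun("docker", "image", "inspect", "localhost/"+img)
}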

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-769798 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-769798
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-769798
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-769798
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-904040/.minikube/files/etc/test/nested/copy/907484/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:3.1: (1.209659999s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:3.3: (1.153996729s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:latest: (1.129491878s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.49s)
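
For reference, the sequence this test exercises can be replayed by hand; a minimal sketch, assuming the functional-200955 profile from this run is still up:

  # pull each remote image into minikube's cache and load it into the node (~1.1-1.2s apiece in this run)
  out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:3.1
  out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:3.3
  out/minikube-linux-arm64 -p functional-200955 cache add registry.k8s.io/pause:latest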

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach71846795/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cache add minikube-local-cache-test:functional-200955
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cache delete minikube-local-cache-test:functional-200955
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-200955
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.08s)
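
The local-image variant follows the same pattern; a sketch in which <context-dir> is a placeholder for the temporary build context the test generates:

  # build a throwaway image, cache it through minikube, then clean up both sides
  docker build -t minikube-local-cache-test:functional-200955 <context-dir>   # <context-dir> is a placeholder
  out/minikube-linux-arm64 -p functional-200955 cache add minikube-local-cache-test:functional-200955
  out/minikube-linux-arm64 -p functional-200955 cache delete minikube-local-cache-test:functional-200955
  docker rmi minikube-local-cache-test:functional-200955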

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.9s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (301.191348ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.90s)
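
The reload flow above, condensed; the failing inspecti in the middle is the expected state, not a test error:

  out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
  out/minikube-linux-arm64 -p functional-200955 cache reload                                            # re-pushes cached images into the node
  out/minikube-linux-arm64 -p functional-200955 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again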

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3423804669/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 config get cpus: exit status 14 (77.782326ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 config get cpus: exit status 14 (68.022201ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
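
The exit-status-14 results above are the expected behaviour of `config get` on an unset key; a sketch of the round trip:

  out/minikube-linux-arm64 -p functional-200955 config get cpus     # exit status 14 while unset
  out/minikube-linux-arm64 -p functional-200955 config set cpus 2
  out/minikube-linux-arm64 -p functional-200955 config get cpus     # now prints 2, exit 0
  out/minikube-linux-arm64 -p functional-200955 config unset cpus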

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (186.320781ms)
-- stdout --
	* [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1213 10:50:22.852089  964493 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:50:22.852281  964493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:22.852315  964493 out.go:374] Setting ErrFile to fd 2...
	I1213 10:50:22.852335  964493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:22.852622  964493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:50:22.853036  964493 out.go:368] Setting JSON to false
	I1213 10:50:22.853970  964493 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19972,"bootTime":1765603051,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:50:22.854067  964493 start.go:143] virtualization:  
	I1213 10:50:22.857498  964493 out.go:179] * [functional-200955] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:50:22.860439  964493 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:50:22.860584  964493 notify.go:221] Checking for updates...
	I1213 10:50:22.866266  964493 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:50:22.869219  964493 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:50:22.872214  964493 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:50:22.875155  964493 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:50:22.878158  964493 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:50:22.881486  964493 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:50:22.882144  964493 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:50:22.909605  964493 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:50:22.909753  964493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:22.966861  964493 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:22.957608932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:22.966968  964493 docker.go:319] overlay module found
	I1213 10:50:22.970123  964493 out.go:179] * Using the docker driver based on existing profile
	I1213 10:50:22.974221  964493 start.go:309] selected driver: docker
	I1213 10:50:22.974244  964493 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:22.974343  964493 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:50:22.977842  964493 out.go:203] 
	W1213 10:50:22.980690  964493 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 10:50:22.983850  964493 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-200955 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
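
A dry run still performs flag validation, which is what the exit status 23 above demonstrates; a minimal sketch of the failing invocation:

  # 250MB is below the 1800MB usable minimum, so this exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY
  out/minikube-linux-arm64 start -p functional-200955 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
  # dropping the undersized --memory lets the same dry run pass, as the second Run line shows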

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-200955 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (201.600939ms)
-- stdout --
	* [functional-200955] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 10:50:22.658974  964446 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:50:22.659186  964446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:22.659218  964446 out.go:374] Setting ErrFile to fd 2...
	I1213 10:50:22.659240  964446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:50:22.659641  964446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:50:22.660099  964446 out.go:368] Setting JSON to false
	I1213 10:50:22.661027  964446 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19972,"bootTime":1765603051,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 10:50:22.661136  964446 start.go:143] virtualization:  
	I1213 10:50:22.664602  964446 out.go:179] * [functional-200955] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 10:50:22.667672  964446 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:50:22.667873  964446 notify.go:221] Checking for updates...
	I1213 10:50:22.673671  964446 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:50:22.676630  964446 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	I1213 10:50:22.679451  964446 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	I1213 10:50:22.682241  964446 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:50:22.685209  964446 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:50:22.688607  964446 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:50:22.689209  964446 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:50:22.722904  964446 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:50:22.723028  964446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:50:22.779841  964446 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:50:22.770393282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:50:22.779956  964446 docker.go:319] overlay module found
	I1213 10:50:22.784907  964446 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 10:50:22.787665  964446 start.go:309] selected driver: docker
	I1213 10:50:22.787689  964446 start.go:927] validating driver "docker" against &{Name:functional-200955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-200955 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:50:22.787805  964446 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:50:22.791354  964446 out.go:203] 
	W1213 10:50:22.794189  964446 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 10:50:22.797059  964446 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.18s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh -n functional-200955 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cp functional-200955:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp11267318/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh -n functional-200955 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh -n functional-200955 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.18s)
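
The three cp directions exercised above, with /tmp/cp-test.txt standing in for the per-test temp path:

  out/minikube-linux-arm64 -p functional-200955 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
  out/minikube-linux-arm64 -p functional-200955 cp functional-200955:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host (illustrative host path)
  out/minikube-linux-arm64 -p functional-200955 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt          # parent dirs created on demand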

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/907484/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/test/nested/copy/907484/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.29s)
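
FileSync relies on minikube copying anything under the profile's .minikube/files tree into the node at the mirrored absolute path; a sketch of the check:

  # host side: $MINIKUBE_HOME/files/etc/test/nested/copy/907484/hosts
  # node side: the same content appears at the mirrored path
  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/test/nested/copy/907484/hosts"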

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/907484.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/ssl/certs/907484.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/907484.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /usr/share/ca-certificates/907484.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/9074842.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/ssl/certs/9074842.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/9074842.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /usr/share/ca-certificates/9074842.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.71s)
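
CertSync verifies each certificate both under its .pem name and under its hash-named alias (e.g. 51391683.0); condensed:

  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/ssl/certs/907484.pem"
  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /usr/share/ca-certificates/907484.pem"
  out/minikube-linux-arm64 -p functional-200955 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy of the same cert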

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh "sudo systemctl is-active docker": exit status 1 (298.374713ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh "sudo systemctl is-active containerd": exit status 1 (267.290405ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
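
The non-zero exits above are expected: `systemctl is-active` prints "inactive" and exits 3 for a stopped unit, which ssh surfaces as a non-zero status. Sketch:

  # with crio as the active runtime, the other runtimes must report inactive
  out/minikube-linux-arm64 -p functional-200955 ssh "sudo systemctl is-active docker"       # "inactive", exit 3
  out/minikube-linux-arm64 -p functional-200955 ssh "sudo systemctl is-active containerd"   # "inactive", exit 3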

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-200955 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "335.264536ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.240898ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "339.562475ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.566374ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.87s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2010494100/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.454837ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 10:50:16.705274  907484 retry.go:31] will retry after 509.046191ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2010494100/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh "sudo umount -f /mount-9p": exit status 1 (276.759975ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-200955 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2010494100/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.87s)
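
A hand-run equivalent of the fixed-port mount above, with <host-dir> as a placeholder for the temp dir the test mounts:

  out/minikube-linux-arm64 mount -p functional-200955 <host-dir>:/mount-9p --port 46464 &   # <host-dir> is a placeholder
  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T /mount-9p | grep 9p"        # may need a retry while the mount comes up
  out/minikube-linux-arm64 -p functional-200955 ssh "sudo umount -f /mount-9p"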

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-200955 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-200955 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo608454427/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.32s)
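
`mount --kill=true` is the cleanup hook being verified here: one call tears down every mount helper process for the profile, which is why all three mount daemons above are gone afterwards.

  out/minikube-linux-arm64 mount -p functional-200955 --kill=true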

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-200955 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-200955
localhost/kicbase/echo-server:functional-200955
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-200955 image ls --format short --alsologtostderr:
I1213 10:50:35.414124  966634 out.go:360] Setting OutFile to fd 1 ...
I1213 10:50:35.414242  966634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:35.414252  966634 out.go:374] Setting ErrFile to fd 2...
I1213 10:50:35.414257  966634 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:35.414531  966634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:50:35.415185  966634 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:35.415321  966634 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:35.415845  966634 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:50:35.433474  966634 ssh_runner.go:195] Run: systemctl --version
I1213 10:50:35.433531  966634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:50:35.451660  966634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
I1213 10:50:35.556213  966634 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)
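
All of the image ls formats in this group draw from the same `sudo crictl images --output json` call inside the node, as the stderr traces show; only the rendering differs:

  out/minikube-linux-arm64 -p functional-200955 image ls --format short
  out/minikube-linux-arm64 -p functional-200955 image ls --format table
  out/minikube-linux-arm64 -p functional-200955 image ls --format json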

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-200955 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/minikube-local-cache-test     │ functional-200955  │ 8f4d1b410c02d │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/kicbase/echo-server           │ functional-200955  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-200955 image ls --format table --alsologtostderr:
I1213 10:50:35.882479  966712 out.go:360] Setting OutFile to fd 1 ...
I1213 10:50:35.882686  966712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:35.882698  966712 out.go:374] Setting ErrFile to fd 2...
I1213 10:50:35.882704  966712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:35.882994  966712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:50:35.883629  966712 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:35.883809  966712 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:35.884373  966712 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:50:35.902011  966712 ssh_runner.go:195] Run: systemctl --version
I1213 10:50:35.902079  966712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:50:35.919480  966712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
I1213 10:50:36.024827  966712 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-200955 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-200955"],"size":"4788229"},{"id":"8f4d1b410c02db353e58f633e741039422bb1298d0f4174987825a4ee14aa43e","repoDigests":["localhost/minikube-local-cache-test@sha256:f1d3606334087689ee570d7af4398c3c146011d3370d3071e72589d72b98048f"],"repoTags":["localhost/minikube-local-cache-test:functional-200955"],"size":"3330"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-200955 image ls --format json --alsologtostderr:
I1213 10:50:35.645519  966670 out.go:360] Setting OutFile to fd 1 ...
I1213 10:50:35.645706  966670 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:35.645736  966670 out.go:374] Setting ErrFile to fd 2...
I1213 10:50:35.645758  966670 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:35.646020  966670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:50:35.646664  966670 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:35.646825  966670 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:35.647378  966670 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:50:35.664909  966670 ssh_runner.go:195] Run: systemctl --version
I1213 10:50:35.664976  966670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:50:35.681943  966670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
I1213 10:50:35.792100  966670 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
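The JSON listing above is the machine-readable form of the same image inventory, and its schema (id, repoDigests, repoTags, size) can be read directly off the dump. Below is a minimal Go sketch of decoding that output, using a trimmed sample copied from it; the struct and variable names are illustrative, not part of minikube or crictl.

package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry mirrors the four fields observed in the dump above.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a decimal string of bytes
}

func main() {
	// Trimmed sample copied verbatim from the test output above.
	sample := []byte(`[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]`)

	var images []imageEntry
	if err := json.Unmarshal(sample, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-30s %s %s bytes\n", img.RepoTags[0], img.ID[:13], img.Size)
	}
}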

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-200955 image ls --format yaml --alsologtostderr:
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-200955
size: "4788229"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8f4d1b410c02db353e58f633e741039422bb1298d0f4174987825a4ee14aa43e
repoDigests:
- localhost/minikube-local-cache-test@sha256:f1d3606334087689ee570d7af4398c3c146011d3370d3071e72589d72b98048f
repoTags:
- localhost/minikube-local-cache-test:functional-200955
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-200955 image ls --format yaml --alsologtostderr:
I1213 10:50:36.115205  966750 out.go:360] Setting OutFile to fd 1 ...
I1213 10:50:36.115384  966750 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:36.115416  966750 out.go:374] Setting ErrFile to fd 2...
I1213 10:50:36.115436  966750 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:36.115827  966750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:50:36.117235  966750 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:36.117450  966750 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:36.118064  966750 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:50:36.136074  966750 ssh_runner.go:195] Run: systemctl --version
I1213 10:50:36.136137  966750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:50:36.152923  966750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
I1213 10:50:36.256583  966750 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
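The YAML listing above carries the same four fields, so aggregates can be computed from it just as easily. A sketch that sums per-image sizes, assuming gopkg.in/yaml.v3 is available (the two entries are trimmed from the output above; the type and variable names are illustrative):

package main

import (
	"fmt"
	"strconv"

	"gopkg.in/yaml.v3"
)

type imageEntry struct {
	ID       string   `yaml:"id"`
	RepoTags []string `yaml:"repoTags"`
	Size     string   `yaml:"size"` // quoted byte count, parsed below
}

func main() {
	// Trimmed sample copied from the YAML output above.
	doc := []byte(`
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
  repoTags:
  - registry.k8s.io/pause:3.1
  size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
  repoTags:
  - registry.k8s.io/pause:3.3
  size: "487479"
`)
	var images []imageEntry
	if err := yaml.Unmarshal(doc, &images); err != nil {
		panic(err)
	}
	var total int64
	for _, img := range images {
		n, err := strconv.ParseInt(img.Size, 10, 64)
		if err != nil {
			panic(err)
		}
		total += n
	}
	fmt.Printf("%d images, %d bytes total\n", len(images), total)
}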

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-200955 ssh pgrep buildkitd: exit status 1 (268.090186ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image build -t localhost/my-image:functional-200955 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-200955 image build -t localhost/my-image:functional-200955 testdata/build --alsologtostderr: (3.17682477s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-200955 image build -t localhost/my-image:functional-200955 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4a41746ef95
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-200955
--> 848dda52743
Successfully tagged localhost/my-image:functional-200955
848dda52743f536ccbac3b6a99f1b8b3c57109b53155773b1b7459bf55fdaeae
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-200955 image build -t localhost/my-image:functional-200955 testdata/build --alsologtostderr:
I1213 10:50:36.615414  966855 out.go:360] Setting OutFile to fd 1 ...
I1213 10:50:36.615583  966855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:36.615614  966855 out.go:374] Setting ErrFile to fd 2...
I1213 10:50:36.615636  966855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:50:36.615929  966855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
I1213 10:50:36.616581  966855 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:36.617279  966855 config.go:182] Loaded profile config "functional-200955": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 10:50:36.617988  966855 cli_runner.go:164] Run: docker container inspect functional-200955 --format={{.State.Status}}
I1213 10:50:36.635696  966855 ssh_runner.go:195] Run: systemctl --version
I1213 10:50:36.635770  966855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-200955
I1213 10:50:36.654066  966855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/functional-200955/id_rsa Username:docker}
I1213 10:50:36.756298  966855 build_images.go:162] Building image from path: /tmp/build.1776380377.tar
I1213 10:50:36.756374  966855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 10:50:36.764173  966855 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1776380377.tar
I1213 10:50:36.767730  966855 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1776380377.tar: stat -c "%s %y" /var/lib/minikube/build/build.1776380377.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1776380377.tar': No such file or directory
I1213 10:50:36.767760  966855 ssh_runner.go:362] scp /tmp/build.1776380377.tar --> /var/lib/minikube/build/build.1776380377.tar (3072 bytes)
I1213 10:50:36.785146  966855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1776380377
I1213 10:50:36.794074  966855 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1776380377 -xf /var/lib/minikube/build/build.1776380377.tar
I1213 10:50:36.802612  966855 crio.go:315] Building image: /var/lib/minikube/build/build.1776380377
I1213 10:50:36.802689  966855 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-200955 /var/lib/minikube/build/build.1776380377 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1213 10:50:39.713874  966855 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-200955 /var/lib/minikube/build/build.1776380377 --cgroup-manager=cgroupfs: (2.911150692s)
I1213 10:50:39.713951  966855 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1776380377
I1213 10:50:39.721931  966855 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1776380377.tar
I1213 10:50:39.729741  966855 build_images.go:218] Built localhost/my-image:functional-200955 from /tmp/build.1776380377.tar
I1213 10:50:39.729773  966855 build_images.go:134] succeeded building to: functional-200955
I1213 10:50:39.729779  966855 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.67s)
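The three STEP lines above fully determine the Dockerfile the test built: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. As a hedged sketch, that flow can be reproduced outside the test harness by writing an equivalent context and driving the same CLI; a minikube binary on PATH, the profile name, and the content.txt payload are assumptions here, and the real testdata/build directory may contain more than the STEP lines show.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Build context reconstructed from the STEP lines in the log above.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// The content.txt payload is not shown in the log; "hello" is a placeholder.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Same invocation the test makes, minus the repo-local binary path.
	cmd := exec.Command("minikube", "-p", "functional-200955",
		"image", "build", "-t", "localhost/my-image:functional-200955", dir)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}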

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-200955
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image load --daemon kicbase/echo-server:functional-200955 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image load --daemon kicbase/echo-server:functional-200955 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-200955
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image load --daemon kicbase/echo-server:functional-200955 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image save kicbase/echo-server:functional-200955 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image rm kicbase/echo-server:functional-200955 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.77s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-200955
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 image save --daemon kicbase/echo-server:functional-200955 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-200955
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-200955 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-200955
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-200955
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-200955
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (203.53s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 10:53:25.225700  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:25.232056  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:25.243449  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:25.264812  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:25.306409  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:25.387884  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:25.549355  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:25.871031  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:26.513034  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:27.794361  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:30.356550  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:35.478027  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:37.841730  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:53:45.720244  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:06.201698  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:47.163831  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:55:25.725811  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m22.649013385s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.53s)

TestMultiControlPlane/serial/DeployApp (6.53s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 kubectl -- rollout status deployment/busybox: (3.655148396s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-2r46p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-drz7b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-pb2wt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-2r46p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-drz7b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-pb2wt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-2r46p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-drz7b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-pb2wt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.53s)

TestMultiControlPlane/serial/PingHostFromPods (1.91s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-2r46p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-2r46p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-drz7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-drz7b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-pb2wt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 kubectl -- exec busybox-7b57f96db7-pb2wt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.91s)

TestMultiControlPlane/serial/AddWorkerNode (59.22s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 node add --alsologtostderr -v 5
E1213 10:56:09.087750  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 node add --alsologtostderr -v 5: (57.905169339s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5: (1.318525343s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.22s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-362091 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.113851039s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (20.33s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 status --output json --alsologtostderr -v 5: (1.083957638s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp testdata/cp-test.txt ha-362091:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2618286647/001/cp-test_ha-362091.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091:/home/docker/cp-test.txt ha-362091-m02:/home/docker/cp-test_ha-362091_ha-362091-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test_ha-362091_ha-362091-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091:/home/docker/cp-test.txt ha-362091-m03:/home/docker/cp-test_ha-362091_ha-362091-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test_ha-362091_ha-362091-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091:/home/docker/cp-test.txt ha-362091-m04:/home/docker/cp-test_ha-362091_ha-362091-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test_ha-362091_ha-362091-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp testdata/cp-test.txt ha-362091-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2618286647/001/cp-test_ha-362091-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m02:/home/docker/cp-test.txt ha-362091:/home/docker/cp-test_ha-362091-m02_ha-362091.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test_ha-362091-m02_ha-362091.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m02:/home/docker/cp-test.txt ha-362091-m03:/home/docker/cp-test_ha-362091-m02_ha-362091-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test_ha-362091-m02_ha-362091-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m02:/home/docker/cp-test.txt ha-362091-m04:/home/docker/cp-test_ha-362091-m02_ha-362091-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test_ha-362091-m02_ha-362091-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp testdata/cp-test.txt ha-362091-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2618286647/001/cp-test_ha-362091-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m03:/home/docker/cp-test.txt ha-362091:/home/docker/cp-test_ha-362091-m03_ha-362091.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test_ha-362091-m03_ha-362091.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m03:/home/docker/cp-test.txt ha-362091-m02:/home/docker/cp-test_ha-362091-m03_ha-362091-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test_ha-362091-m03_ha-362091-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m03:/home/docker/cp-test.txt ha-362091-m04:/home/docker/cp-test_ha-362091-m03_ha-362091-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test_ha-362091-m03_ha-362091-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp testdata/cp-test.txt ha-362091-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2618286647/001/cp-test_ha-362091-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m04:/home/docker/cp-test.txt ha-362091:/home/docker/cp-test_ha-362091-m04_ha-362091.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091 "sudo cat /home/docker/cp-test_ha-362091-m04_ha-362091.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m04:/home/docker/cp-test.txt ha-362091-m02:/home/docker/cp-test_ha-362091-m04_ha-362091-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m02 "sudo cat /home/docker/cp-test_ha-362091-m04_ha-362091-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 cp ha-362091-m04:/home/docker/cp-test.txt ha-362091-m03:/home/docker/cp-test_ha-362091-m04_ha-362091-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 ssh -n ha-362091-m03 "sudo cat /home/docker/cp-test_ha-362091-m04_ha-362091-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.33s)
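Several of the multi-control-plane steps shell out to minikube status, including the --output json form used in CopyFile above. Below is a sketch of consuming that output programmatically; the field names follow the status struct dumped in the stderr further down (Name, Host, Kubelet, APIServer, Kubeconfig, Worker), but the exact JSON shape, and that a multi-node profile yields an array, are assumptions of this sketch, not confirmed by the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus mirrors the status struct visible in the stderr below; the JSON
// key casing is assumed to match the Go field names.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// status exits non-zero when any node is stopped (exit status 7 in the
	// StopSecondaryNode step below) but still prints the JSON payload.
	out, err := exec.Command("minikube", "-p", "ha-362091",
		"status", "--output", "json").Output()
	if err != nil {
		fmt.Println("non-zero exit:", err)
	}
	var nodes []nodeStatus
	if jsonErr := json.Unmarshal(out, &nodes); jsonErr != nil {
		panic(jsonErr)
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
			n.Name, n.Host, n.Kubelet, n.APIServer)
	}
}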

TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 node stop m02 --alsologtostderr -v 5: (12.089095164s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5: exit status 7 (797.487578ms)
-- stdout --
	ha-362091
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-362091-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362091-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-362091-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:57:34.327943  982987 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:57:34.328057  982987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:57:34.328069  982987 out.go:374] Setting ErrFile to fd 2...
	I1213 10:57:34.328075  982987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:57:34.328423  982987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 10:57:34.328645  982987 out.go:368] Setting JSON to false
	I1213 10:57:34.328716  982987 mustload.go:66] Loading cluster: ha-362091
	I1213 10:57:34.329494  982987 config.go:182] Loaded profile config "ha-362091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:57:34.329566  982987 status.go:174] checking status of ha-362091 ...
	I1213 10:57:34.330467  982987 cli_runner.go:164] Run: docker container inspect ha-362091 --format={{.State.Status}}
	I1213 10:57:34.331109  982987 notify.go:221] Checking for updates...
	I1213 10:57:34.351427  982987 status.go:371] ha-362091 host status = "Running" (err=<nil>)
	I1213 10:57:34.351468  982987 host.go:66] Checking if "ha-362091" exists ...
	I1213 10:57:34.351776  982987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362091
	I1213 10:57:34.380895  982987 host.go:66] Checking if "ha-362091" exists ...
	I1213 10:57:34.381216  982987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:57:34.381275  982987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362091
	I1213 10:57:34.400966  982987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/ha-362091/id_rsa Username:docker}
	I1213 10:57:34.515244  982987 ssh_runner.go:195] Run: systemctl --version
	I1213 10:57:34.521882  982987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:34.535800  982987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:57:34.601848  982987 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-13 10:57:34.590390111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:57:34.602419  982987 kubeconfig.go:125] found "ha-362091" server: "https://192.168.49.254:8443"
	I1213 10:57:34.602557  982987 api_server.go:166] Checking apiserver status ...
	I1213 10:57:34.602626  982987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:57:34.615196  982987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1260/cgroup
	I1213 10:57:34.626268  982987 api_server.go:182] apiserver freezer: "6:freezer:/docker/faf9004d6263ea6ef41c268452e74f2da9a52c9bbef9e327b294e100bc056aa9/crio/crio-d1cdac999cdaba42d42a3c64dbf88f2a4217fba5ca15aed173ac3fbac2aa0f12"
	I1213 10:57:34.626394  982987 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/faf9004d6263ea6ef41c268452e74f2da9a52c9bbef9e327b294e100bc056aa9/crio/crio-d1cdac999cdaba42d42a3c64dbf88f2a4217fba5ca15aed173ac3fbac2aa0f12/freezer.state
	I1213 10:57:34.634992  982987 api_server.go:204] freezer state: "THAWED"
	I1213 10:57:34.635018  982987 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 10:57:34.644678  982987 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 10:57:34.644714  982987 status.go:463] ha-362091 apiserver status = Running (err=<nil>)
	I1213 10:57:34.644735  982987 status.go:176] ha-362091 status: &{Name:ha-362091 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:57:34.644754  982987 status.go:174] checking status of ha-362091-m02 ...
	I1213 10:57:34.645087  982987 cli_runner.go:164] Run: docker container inspect ha-362091-m02 --format={{.State.Status}}
	I1213 10:57:34.671281  982987 status.go:371] ha-362091-m02 host status = "Stopped" (err=<nil>)
	I1213 10:57:34.671305  982987 status.go:384] host is not running, skipping remaining checks
	I1213 10:57:34.671313  982987 status.go:176] ha-362091-m02 status: &{Name:ha-362091-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:57:34.671334  982987 status.go:174] checking status of ha-362091-m03 ...
	I1213 10:57:34.671687  982987 cli_runner.go:164] Run: docker container inspect ha-362091-m03 --format={{.State.Status}}
	I1213 10:57:34.690380  982987 status.go:371] ha-362091-m03 host status = "Running" (err=<nil>)
	I1213 10:57:34.690406  982987 host.go:66] Checking if "ha-362091-m03" exists ...
	I1213 10:57:34.690718  982987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362091-m03
	I1213 10:57:34.707284  982987 host.go:66] Checking if "ha-362091-m03" exists ...
	I1213 10:57:34.707622  982987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:57:34.707671  982987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362091-m03
	I1213 10:57:34.725490  982987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/ha-362091-m03/id_rsa Username:docker}
	I1213 10:57:34.831206  982987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:34.845283  982987 kubeconfig.go:125] found "ha-362091" server: "https://192.168.49.254:8443"
	I1213 10:57:34.845314  982987 api_server.go:166] Checking apiserver status ...
	I1213 10:57:34.845369  982987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:57:34.858170  982987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	I1213 10:57:34.866750  982987 api_server.go:182] apiserver freezer: "6:freezer:/docker/dfd92a72c1f365cbf78b74460a5ca009dcbcc5573471d101e0bd8f363f0355d3/crio/crio-b5aab86080cd46cd79a6aafe96dde004439be5afeb55ce12772a03bf57bbf37d"
	I1213 10:57:34.866828  982987 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dfd92a72c1f365cbf78b74460a5ca009dcbcc5573471d101e0bd8f363f0355d3/crio/crio-b5aab86080cd46cd79a6aafe96dde004439be5afeb55ce12772a03bf57bbf37d/freezer.state
	I1213 10:57:34.874834  982987 api_server.go:204] freezer state: "THAWED"
	I1213 10:57:34.874861  982987 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 10:57:34.883235  982987 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 10:57:34.883264  982987 status.go:463] ha-362091-m03 apiserver status = Running (err=<nil>)
	I1213 10:57:34.883274  982987 status.go:176] ha-362091-m03 status: &{Name:ha-362091-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 10:57:34.883290  982987 status.go:174] checking status of ha-362091-m04 ...
	I1213 10:57:34.883655  982987 cli_runner.go:164] Run: docker container inspect ha-362091-m04 --format={{.State.Status}}
	I1213 10:57:34.902892  982987 status.go:371] ha-362091-m04 host status = "Running" (err=<nil>)
	I1213 10:57:34.902921  982987 host.go:66] Checking if "ha-362091-m04" exists ...
	I1213 10:57:34.903248  982987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362091-m04
	I1213 10:57:34.920901  982987 host.go:66] Checking if "ha-362091-m04" exists ...
	I1213 10:57:34.921248  982987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:57:34.921295  982987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362091-m04
	I1213 10:57:34.939258  982987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/ha-362091-m04/id_rsa Username:docker}
	I1213 10:57:35.048983  982987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:35.068900  982987 status.go:176] ha-362091-m04 status: &{Name:ha-362091-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
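
For context, the stderr block above shows how minikube's status command validates a control-plane node: it pgreps the kube-apiserver process, resolves that process's freezer cgroup inside the node container, confirms the cgroup state is THAWED, and only then probes https://192.168.49.254:8443/healthz. The following is a minimal Go sketch of those last two checks, assuming only the cgroup path format and endpoint visible in the log; skipping TLS verification is a shortcut for the sketch, not necessarily what minikube's real client does.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Expected arg: a path of the form seen in the log, e.g.
	// /sys/fs/cgroup/freezer/docker/<node-id>/crio/crio-<apiserver-id>/freezer.state
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: apiserver-probe <freezer.state path>")
		os.Exit(2)
	}
	state, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "read freezer state:", err)
		os.Exit(1)
	}
	if s := strings.TrimSpace(string(state)); s != "THAWED" {
		fmt.Fprintf(os.Stderr, "apiserver container is %q, not THAWED\n", s)
		os.Exit(1)
	}
	// Probe the load-balancer endpoint from the log. InsecureSkipVerify is an
	// assumption made for this sketch only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode)
}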

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.68s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 node start m02 --alsologtostderr -v 5: (31.226429281s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5: (1.342628408s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.68s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.349656996s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 stop --alsologtostderr -v 5
E1213 10:58:25.221475  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:58:28.799886  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:58:37.839906  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 stop --alsologtostderr -v 5: (37.7029185s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 start --wait true --alsologtostderr -v 5
E1213 10:58:52.930022  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:00:25.726251  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 start --wait true --alsologtostderr -v 5: (1m40.51293917s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.42s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.14s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 node delete m03 --alsologtostderr -v 5: (11.146006671s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.14s)
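
The Ready check at ha_test.go:521 relies on a kubectl go-template that walks every node's conditions and prints the status of the Ready one. Below is a short Go sketch that runs the same template and flags any node that is not Ready; the template string is copied from the log with the shell quoting simplified, and kubectl context handling is omitted.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test passes: for every node, print the status value of
	// its "Ready" condition, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes Ready")
}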

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (36.14s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 stop --alsologtostderr -v 5: (36.013924677s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5: exit status 7 (121.944584ms)

-- stdout --
	ha-362091
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362091-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362091-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 11:01:17.350670  995187 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:01:17.350812  995187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:01:17.350824  995187 out.go:374] Setting ErrFile to fd 2...
	I1213 11:01:17.350830  995187 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:01:17.351211  995187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:01:17.351555  995187 out.go:368] Setting JSON to false
	I1213 11:01:17.351613  995187 mustload.go:66] Loading cluster: ha-362091
	I1213 11:01:17.352349  995187 config.go:182] Loaded profile config "ha-362091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:01:17.352481  995187 status.go:174] checking status of ha-362091 ...
	I1213 11:01:17.352643  995187 notify.go:221] Checking for updates...
	I1213 11:01:17.353260  995187 cli_runner.go:164] Run: docker container inspect ha-362091 --format={{.State.Status}}
	I1213 11:01:17.371971  995187 status.go:371] ha-362091 host status = "Stopped" (err=<nil>)
	I1213 11:01:17.371997  995187 status.go:384] host is not running, skipping remaining checks
	I1213 11:01:17.372005  995187 status.go:176] ha-362091 status: &{Name:ha-362091 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:01:17.372030  995187 status.go:174] checking status of ha-362091-m02 ...
	I1213 11:01:17.372351  995187 cli_runner.go:164] Run: docker container inspect ha-362091-m02 --format={{.State.Status}}
	I1213 11:01:17.398487  995187 status.go:371] ha-362091-m02 host status = "Stopped" (err=<nil>)
	I1213 11:01:17.398514  995187 status.go:384] host is not running, skipping remaining checks
	I1213 11:01:17.398522  995187 status.go:176] ha-362091-m02 status: &{Name:ha-362091-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:01:17.398541  995187 status.go:174] checking status of ha-362091-m04 ...
	I1213 11:01:17.398852  995187 cli_runner.go:164] Run: docker container inspect ha-362091-m04 --format={{.State.Status}}
	I1213 11:01:17.417009  995187 status.go:371] ha-362091-m04 host status = "Stopped" (err=<nil>)
	I1213 11:01:17.417033  995187 status.go:384] host is not running, skipping remaining checks
	I1213 11:01:17.417040  995187 status.go:176] ha-362091-m04 status: &{Name:ha-362091-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.14s)
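
Note the non-zero exit above: with every host stopped, `minikube status` exits with code 7 while still printing a complete per-node report, so callers have to treat that exit code as data rather than as a hard failure. A small Go sketch of invoking it that way, using the binary and profile names from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run status exactly as the test does; capture output even on non-zero exit,
	// since exit status 7 (seen in the log when all hosts are stopped) is an
	// expected result, not an execution error.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-362091", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("status exit code:", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}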

TestMultiControlPlane/serial/RestartCluster (93.48s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m32.465897466s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.48s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.82s)

TestMultiControlPlane/serial/AddSecondaryNode (63.54s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 node add --control-plane --alsologtostderr -v 5
E1213 11:03:25.226924  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:03:37.840904  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 node add --control-plane --alsologtostderr -v 5: (1m2.439731979s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-362091 status --alsologtostderr -v 5: (1.0962435s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (63.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.045679834s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestJSONOutput/start/Command (79.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-608785 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-608785 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.439499061s)
--- PASS: TestJSONOutput/start/Command (79.45s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.92s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-608785 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-608785 --output=json --user=testUser: (5.922534708s)
--- PASS: TestJSONOutput/stop/Command (5.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-853126 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-853126 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.264727ms)

-- stdout --
	{"specversion":"1.0","id":"4277f77c-4862-43ea-a111-0dc8ee235bb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-853126] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e66e8b8-751c-41ae-8454-64105bbac924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"00237e1e-4cbe-4675-95ab-473f9e02ebd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d6e62d05-938f-4bb5-9330-44c686402131","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig"}}
	{"specversion":"1.0","id":"c0671ab6-a72e-4b6e-a049-00e2767e9c8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube"}}
	{"specversion":"1.0","id":"8c4c6062-bb4f-40c3-8cca-c35a321526b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"87e69652-41b9-48a2-b689-b8c64cfb5c4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f1e1d15-0c99-4986-8d9e-0d7ea5f38c38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-853126" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-853126
--- PASS: TestErrorJSONOutput (0.25s)
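
Each line in the stdout block above is a CloudEvents-style JSON object whose data payload carries only string values (currentstep, totalsteps, exitcode, message, and so on). Below is a minimal Go sketch of consuming that stream, assuming only the fields visible in the log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the JSON lines above
// (specversion, id, source, type, data); unmodeled data keys such as
// "exitcode" or "advice" simply land in the map.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines in the stream
		}
		// Both info and error events carry a "message" key in the log above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}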

TestKicCustomNetwork/create_custom_network (39.17s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-348871 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-348871 --network=: (36.903812462s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-348871" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-348871
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-348871: (2.249900851s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.17s)

TestKicCustomNetwork/use_default_bridge_network (35.86s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-108927 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-108927 --network=bridge: (33.793189708s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-108927" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-108927
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-108927: (2.037231064s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.86s)

TestKicExistingNetwork (34.13s)

=== RUN   TestKicExistingNetwork
I1213 11:06:53.882207  907484 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 11:06:53.898081  907484 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 11:06:53.898866  907484 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 11:06:53.898918  907484 cli_runner.go:164] Run: docker network inspect existing-network
W1213 11:06:53.915344  907484 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 11:06:53.915377  907484 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1213 11:06:53.915393  907484 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1213 11:06:53.915499  907484 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 11:06:53.933518  907484 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-79c0f817da2f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0a:02:cb:26:48:e7} reservation:<nil>}
I1213 11:06:53.933977  907484 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001547290}
I1213 11:06:53.934010  907484 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 11:06:53.934066  907484 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 11:06:54.000299  907484 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-338484 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-338484 --network=existing-network: (31.836182682s)
helpers_test.go:176: Cleaning up "existing-network-338484" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-338484
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-338484: (2.13351895s)
I1213 11:07:27.990305  907484 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.13s)
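
Before starting the cluster, the test pre-creates the docker network exactly as network_create.go logs it: a bridge with an explicit subnet, gateway, and MTU option plus minikube ownership labels, chosen after skipping subnets already in use (192.168.49.0/24 in this run). A Go sketch that replays that one command with the values from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the `docker network create` invocation in the log above; the
	// network name and CIDR here are just the values this run happened to use.
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("docker network create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network: %s", out)
}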

TestKicCustomSubnet (38.3s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-417199 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-417199 --subnet=192.168.60.0/24: (35.953017843s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-417199 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-417199" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-417199
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-417199: (2.330337499s)
--- PASS: TestKicCustomSubnet (38.30s)

TestKicStaticIP (38.42s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-914677 --static-ip=192.168.200.200
E1213 11:08:20.921720  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:08:25.223047  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:08:37.840947  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-914677 --static-ip=192.168.200.200: (35.877116523s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-914677 ip
helpers_test.go:176: Cleaning up "static-ip-914677" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-914677
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-914677: (2.360252769s)
--- PASS: TestKicStaticIP (38.42s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.83s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-021224 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-021224 --driver=docker  --container-runtime=crio: (34.034534966s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-025293 --driver=docker  --container-runtime=crio
E1213 11:09:48.292214  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-025293 --driver=docker  --container-runtime=crio: (31.129230954s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-021224
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-025293
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-025293" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-025293
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-025293: (2.047893543s)
helpers_test.go:176: Cleaning up "first-021224" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-021224
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-021224: (2.039466742s)
--- PASS: TestMinikubeProfile (70.83s)

TestMountStart/serial/StartWithMountFirst (8.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-862990 --memory=3072 --mount-string /tmp/TestMountStartserial2416214543/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-862990 --memory=3072 --mount-string /tmp/TestMountStartserial2416214543/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.863898086s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.87s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-862990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (9.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-864951 --memory=3072 --mount-string /tmp/TestMountStartserial2416214543/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-864951 --memory=3072 --mount-string /tmp/TestMountStartserial2416214543/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.110534924s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.11s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-864951 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-862990 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-862990 --alsologtostderr -v=5: (1.715283234s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-864951 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.32s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-864951
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-864951: (1.316134453s)
--- PASS: TestMountStart/serial/Stop (1.32s)

TestMountStart/serial/RestartStopped (7.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-864951
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-864951: (6.972809993s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-864951 ssh -- ls /minikube-host
E1213 11:10:25.726566  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (139.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538180 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538180 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.706685768s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.25s)

TestMultiNode/serial/DeployApp2Nodes (4.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-538180 -- rollout status deployment/busybox: (3.005321025s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-2t25w -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-vvr2s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-2t25w -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-vvr2s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-2t25w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-vvr2s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.85s)

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-2t25w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-2t25w -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-vvr2s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-538180 -- exec busybox-7b57f96db7-vvr2s -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
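
The ping test above is a two-step probe run inside each busybox pod: nslookup host.minikube.internal, take the address from line 5 of the output, then ping it once (192.168.67.1, the node network's gateway, in this run). A Go sketch of the same sequence against one pod, with the pod name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: resolve host.minikube.internal inside the pod, using the same
	// nslookup | awk | cut pipeline the test runs.
	pod := "busybox-7b57f96db7-2t25w"
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	ip, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	addr := strings.TrimSpace(string(ip))

	// Step 2: ping the resolved address once from inside the same pod.
	ping := fmt.Sprintf("ping -c 1 %s", addr)
	if out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", ping).CombinedOutput(); err != nil {
		fmt.Printf("ping failed: %v\n%s", err, out)
		return
	}
	fmt.Println("host reachable at", addr)
}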

TestMultiNode/serial/AddNode (59.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-538180 -v=5 --alsologtostderr
E1213 11:13:25.221662  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:13:37.840958  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-538180 -v=5 --alsologtostderr: (58.747341786s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.47s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-538180 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp testdata/cp-test.txt multinode-538180:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1404734558/001/cp-test_multinode-538180.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180:/home/docker/cp-test.txt multinode-538180-m02:/home/docker/cp-test_multinode-538180_multinode-538180-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m02 "sudo cat /home/docker/cp-test_multinode-538180_multinode-538180-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180:/home/docker/cp-test.txt multinode-538180-m03:/home/docker/cp-test_multinode-538180_multinode-538180-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m03 "sudo cat /home/docker/cp-test_multinode-538180_multinode-538180-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp testdata/cp-test.txt multinode-538180-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1404734558/001/cp-test_multinode-538180-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180-m02:/home/docker/cp-test.txt multinode-538180:/home/docker/cp-test_multinode-538180-m02_multinode-538180.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180 "sudo cat /home/docker/cp-test_multinode-538180-m02_multinode-538180.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180-m02:/home/docker/cp-test.txt multinode-538180-m03:/home/docker/cp-test_multinode-538180-m02_multinode-538180-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m03 "sudo cat /home/docker/cp-test_multinode-538180-m02_multinode-538180-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp testdata/cp-test.txt multinode-538180-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1404734558/001/cp-test_multinode-538180-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180-m03:/home/docker/cp-test.txt multinode-538180:/home/docker/cp-test_multinode-538180-m03_multinode-538180.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180 "sudo cat /home/docker/cp-test_multinode-538180-m03_multinode-538180.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 cp multinode-538180-m03:/home/docker/cp-test.txt multinode-538180-m02:/home/docker/cp-test_multinode-538180-m03_multinode-538180-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 ssh -n multinode-538180-m02 "sudo cat /home/docker/cp-test_multinode-538180-m03_multinode-538180-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.87s)
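
The CopyFile sequence above is one round trip repeated for every node pair: copy a file in with "minikube cp", then read it back over SSH to confirm the contents arrived. A minimal sketch of that pattern, using only commands that appear in this log and a hypothetical profile name "demo":

    # copy a local file into the primary node of the "demo" profile
    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
    # read it back over SSH to verify the copy
    minikube -p demo ssh -n demo "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies name both the source and destination nodes
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt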

TestMultiNode/serial/StopNode (2.49s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-538180 node stop m03: (1.342097147s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538180 status: exit status 7 (544.620643ms)
-- stdout --
	multinode-538180
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538180-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538180-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr: exit status 7 (598.401372ms)
-- stdout --
	multinode-538180
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538180-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538180-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1213 11:14:05.818346 1045715 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:14:05.818479 1045715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:14:05.818492 1045715 out.go:374] Setting ErrFile to fd 2...
	I1213 11:14:05.818514 1045715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:14:05.818828 1045715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:14:05.819075 1045715 out.go:368] Setting JSON to false
	I1213 11:14:05.819120 1045715 mustload.go:66] Loading cluster: multinode-538180
	I1213 11:14:05.819181 1045715 notify.go:221] Checking for updates...
	I1213 11:14:05.820225 1045715 config.go:182] Loaded profile config "multinode-538180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:14:05.820257 1045715 status.go:174] checking status of multinode-538180 ...
	I1213 11:14:05.820885 1045715 cli_runner.go:164] Run: docker container inspect multinode-538180 --format={{.State.Status}}
	I1213 11:14:05.841144 1045715 status.go:371] multinode-538180 host status = "Running" (err=<nil>)
	I1213 11:14:05.841202 1045715 host.go:66] Checking if "multinode-538180" exists ...
	I1213 11:14:05.841654 1045715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-538180
	I1213 11:14:05.876658 1045715 host.go:66] Checking if "multinode-538180" exists ...
	I1213 11:14:05.876991 1045715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:14:05.877052 1045715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-538180
	I1213 11:14:05.895977 1045715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33648 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/multinode-538180/id_rsa Username:docker}
	I1213 11:14:06.007344 1045715 ssh_runner.go:195] Run: systemctl --version
	I1213 11:14:06.016679 1045715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:14:06.031572 1045715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:14:06.114437 1045715 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:14:06.096672786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:14:06.115078 1045715 kubeconfig.go:125] found "multinode-538180" server: "https://192.168.67.2:8443"
	I1213 11:14:06.115110 1045715 api_server.go:166] Checking apiserver status ...
	I1213 11:14:06.115160 1045715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:14:06.127437 1045715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup
	I1213 11:14:06.136717 1045715 api_server.go:182] apiserver freezer: "6:freezer:/docker/fdd8d3778c16c4a12d5a0b45a03d3875ab6bff19778cc567b9e1c102d4717efc/crio/crio-d0a136248af2ddea4a417dec21ad0ea1854b4858703e49b12fcebca5a51aa033"
	I1213 11:14:06.136806 1045715 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fdd8d3778c16c4a12d5a0b45a03d3875ab6bff19778cc567b9e1c102d4717efc/crio/crio-d0a136248af2ddea4a417dec21ad0ea1854b4858703e49b12fcebca5a51aa033/freezer.state
	I1213 11:14:06.145166 1045715 api_server.go:204] freezer state: "THAWED"
	I1213 11:14:06.145198 1045715 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 11:14:06.153695 1045715 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 11:14:06.153733 1045715 status.go:463] multinode-538180 apiserver status = Running (err=<nil>)
	I1213 11:14:06.153760 1045715 status.go:176] multinode-538180 status: &{Name:multinode-538180 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:14:06.153785 1045715 status.go:174] checking status of multinode-538180-m02 ...
	I1213 11:14:06.154138 1045715 cli_runner.go:164] Run: docker container inspect multinode-538180-m02 --format={{.State.Status}}
	I1213 11:14:06.171827 1045715 status.go:371] multinode-538180-m02 host status = "Running" (err=<nil>)
	I1213 11:14:06.171854 1045715 host.go:66] Checking if "multinode-538180-m02" exists ...
	I1213 11:14:06.172173 1045715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-538180-m02
	I1213 11:14:06.189920 1045715 host.go:66] Checking if "multinode-538180-m02" exists ...
	I1213 11:14:06.190240 1045715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:14:06.190296 1045715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-538180-m02
	I1213 11:14:06.214996 1045715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33653 SSHKeyPath:/home/jenkins/minikube-integration/22128-904040/.minikube/machines/multinode-538180-m02/id_rsa Username:docker}
	I1213 11:14:06.322974 1045715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:14:06.336153 1045715 status.go:176] multinode-538180-m02 status: &{Name:multinode-538180-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:14:06.336248 1045715 status.go:174] checking status of multinode-538180-m03 ...
	I1213 11:14:06.336637 1045715 cli_runner.go:164] Run: docker container inspect multinode-538180-m03 --format={{.State.Status}}
	I1213 11:14:06.354583 1045715 status.go:371] multinode-538180-m03 host status = "Stopped" (err=<nil>)
	I1213 11:14:06.354606 1045715 status.go:384] host is not running, skipping remaining checks
	I1213 11:14:06.354623 1045715 status.go:176] multinode-538180-m03 status: &{Name:multinode-538180-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
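
Both status invocations above exit 7 rather than 0; judging from this log, "minikube status" returns a non-zero code whenever any node in the profile is stopped, and the test passes because it accepts exit status 7 while checking the rendered output. A sketch of that check, with a hypothetical profile name "demo":

    # stop one worker, then inspect status; exit code 7 signals a stopped host
    minikube -p demo node stop m03
    minikube -p demo status
    [ $? -eq 7 ] && echo "at least one node is stopped"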

TestMultiNode/serial/StartAfterStop (8.18s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-538180 node start m03 -v=5 --alsologtostderr: (7.387922311s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.18s)

TestMultiNode/serial/RestartKeepsNodes (78.44s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-538180
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-538180
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-538180: (25.081031006s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538180 --wait=true -v=5 --alsologtostderr
E1213 11:15:08.801792  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:15:25.725796  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538180 --wait=true -v=5 --alsologtostderr: (53.244561568s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-538180
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.44s)
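
The restart path here is simply stop-then-start on the same profile with --wait=true; the second "node list" confirms the worker set survives. Condensed, with a hypothetical profile name "demo":

    minikube node list -p demo
    minikube stop -p demo
    minikube start -p demo --wait=true -v=5 --alsologtostderr
    minikube node list -p demo   # expect the same nodes as before the stop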

TestMultiNode/serial/DeleteNode (5.67s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-538180 node delete m03: (4.951992231s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)
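
After the delete, node health is verified with a go-template that prints each node's Ready condition. The same check can be run by hand (template quoted exactly as the test logs it), with a hypothetical profile name "demo":

    minikube -p demo node delete m03
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"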

TestMultiNode/serial/StopMultiNode (24.05s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-538180 stop: (23.848616608s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538180 status: exit status 7 (108.670309ms)
-- stdout --
	multinode-538180
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538180-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr: exit status 7 (95.833409ms)
-- stdout --
	multinode-538180
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538180-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1213 11:16:02.651764 1053546 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:16:02.651908 1053546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:16:02.651919 1053546 out.go:374] Setting ErrFile to fd 2...
	I1213 11:16:02.651925 1053546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:16:02.652202 1053546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:16:02.652451 1053546 out.go:368] Setting JSON to false
	I1213 11:16:02.652485 1053546 mustload.go:66] Loading cluster: multinode-538180
	I1213 11:16:02.652586 1053546 notify.go:221] Checking for updates...
	I1213 11:16:02.652918 1053546 config.go:182] Loaded profile config "multinode-538180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:16:02.652940 1053546 status.go:174] checking status of multinode-538180 ...
	I1213 11:16:02.653874 1053546 cli_runner.go:164] Run: docker container inspect multinode-538180 --format={{.State.Status}}
	I1213 11:16:02.673005 1053546 status.go:371] multinode-538180 host status = "Stopped" (err=<nil>)
	I1213 11:16:02.673030 1053546 status.go:384] host is not running, skipping remaining checks
	I1213 11:16:02.673038 1053546 status.go:176] multinode-538180 status: &{Name:multinode-538180 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:16:02.673071 1053546 status.go:174] checking status of multinode-538180-m02 ...
	I1213 11:16:02.673447 1053546 cli_runner.go:164] Run: docker container inspect multinode-538180-m02 --format={{.State.Status}}
	I1213 11:16:02.695631 1053546 status.go:371] multinode-538180-m02 host status = "Stopped" (err=<nil>)
	I1213 11:16:02.695649 1053546 status.go:384] host is not running, skipping remaining checks
	I1213 11:16:02.695665 1053546 status.go:176] multinode-538180-m02 status: &{Name:multinode-538180-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (58.5s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538180 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538180 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.712287852s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-538180 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.50s)

TestMultiNode/serial/ValidateNameConflict (30.39s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-538180
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538180-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-538180-m02 --driver=docker  --container-runtime=crio: exit status 14 (99.886104ms)
-- stdout --
	* [multinode-538180-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-538180-m02' is duplicated with machine name 'multinode-538180-m02' in profile 'multinode-538180'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-538180-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-538180-m03 --driver=docker  --container-runtime=crio: (27.740467872s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-538180
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-538180: exit status 80 (349.226553ms)
-- stdout --
	* Adding node m03 to cluster multinode-538180 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-538180-m03 already exists in multinode-538180-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-538180-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-538180-m03: (2.136810351s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.39s)
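
Both failures above are name-collision guards: starting a profile whose name matches a machine in an existing multi-node profile exits 14 (MK_USAGE), and "node add" exits 80 (GUEST_NODE_ADD) when the next node name is already held by a standalone profile. Sketched with hypothetical profile names:

    # exits 14: "demo-m02" is already the second machine of profile "demo"
    minikube start -p demo-m02 --driver=docker --container-runtime=crio
    # exits 80: the next node name for "demo" is taken by another profile
    minikube node add -p demo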

TestPreload (119.61s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-772604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1213 11:18:25.221090  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-772604 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (59.666824693s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-772604 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-772604 image pull gcr.io/k8s-minikube/busybox: (2.131613841s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-772604
E1213 11:18:37.840103  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-772604: (5.883091005s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-772604 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-772604 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.27170001s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-772604 image list
helpers_test.go:176: Cleaning up "test-preload-772604" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-772604
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-772604: (2.413864769s)
--- PASS: TestPreload (119.61s)
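
The preload check is a three-step round trip: start with --preload=false so images are pulled individually, add an image that is not in the preload tarball, then restart with --preload=true and confirm the extra image survived. Condensed, with a hypothetical profile name "demo":

    minikube start -p demo --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio
    minikube -p demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p demo
    minikube start -p demo --preload=true --wait=true --driver=docker --container-runtime=crio
    minikube -p demo image list   # busybox should still be listed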

TestScheduledStopUnix (111.54s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-817965 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-817965 --memory=3072 --driver=docker  --container-runtime=crio: (34.815237293s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817965 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1213 11:20:10.385604 1067631 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:20:10.385820 1067631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:10.385848 1067631 out.go:374] Setting ErrFile to fd 2...
	I1213 11:20:10.385867 1067631 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:10.386162 1067631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:20:10.386470 1067631 out.go:368] Setting JSON to false
	I1213 11:20:10.386662 1067631 mustload.go:66] Loading cluster: scheduled-stop-817965
	I1213 11:20:10.387087 1067631 config.go:182] Loaded profile config "scheduled-stop-817965": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:20:10.387224 1067631 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/config.json ...
	I1213 11:20:10.387465 1067631 mustload.go:66] Loading cluster: scheduled-stop-817965
	I1213 11:20:10.387633 1067631 config.go:182] Loaded profile config "scheduled-stop-817965": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-817965 -n scheduled-stop-817965
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817965 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1213 11:20:10.851534 1067722 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:20:10.851713 1067722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:10.851735 1067722 out.go:374] Setting ErrFile to fd 2...
	I1213 11:20:10.851757 1067722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:10.852177 1067722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:20:10.852535 1067722 out.go:368] Setting JSON to false
	I1213 11:20:10.853646 1067722 daemonize_unix.go:73] killing process 1067647 as it is an old scheduled stop
	I1213 11:20:10.853740 1067722 mustload.go:66] Loading cluster: scheduled-stop-817965
	I1213 11:20:10.854407 1067722 config.go:182] Loaded profile config "scheduled-stop-817965": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:20:10.854499 1067722 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/config.json ...
	I1213 11:20:10.854710 1067722 mustload.go:66] Loading cluster: scheduled-stop-817965
	I1213 11:20:10.857775 1067722 config.go:182] Loaded profile config "scheduled-stop-817965": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 11:20:10.864013  907484 retry.go:31] will retry after 69.401µs: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.864742  907484 retry.go:31] will retry after 174.794µs: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.865890  907484 retry.go:31] will retry after 280.739µs: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.867127  907484 retry.go:31] will retry after 315.685µs: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.868278  907484 retry.go:31] will retry after 597.037µs: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.869393  907484 retry.go:31] will retry after 1.005218ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.870495  907484 retry.go:31] will retry after 1.647533ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.872610  907484 retry.go:31] will retry after 1.633806ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.875062  907484 retry.go:31] will retry after 1.604682ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.877288  907484 retry.go:31] will retry after 5.389338ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.883537  907484 retry.go:31] will retry after 7.039248ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.890746  907484 retry.go:31] will retry after 6.08609ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.898276  907484 retry.go:31] will retry after 14.532556ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.913514  907484 retry.go:31] will retry after 11.821553ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.925760  907484 retry.go:31] will retry after 20.459539ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
I1213 11:20:10.947030  907484 retry.go:31] will retry after 44.820447ms: open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817965 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
E1213 11:20:25.726250  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817965 -n scheduled-stop-817965
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817965
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817965 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1213 11:20:36.811902 1068083 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:20:36.812105 1068083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:36.812132 1068083 out.go:374] Setting ErrFile to fd 2...
	I1213 11:20:36.812151 1068083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:20:36.812430 1068083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-904040/.minikube/bin
	I1213 11:20:36.812733 1068083 out.go:368] Setting JSON to false
	I1213 11:20:36.812886 1068083 mustload.go:66] Loading cluster: scheduled-stop-817965
	I1213 11:20:36.813359 1068083 config.go:182] Loaded profile config "scheduled-stop-817965": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:20:36.813455 1068083 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/scheduled-stop-817965/config.json ...
	I1213 11:20:36.813752 1068083 mustload.go:66] Loading cluster: scheduled-stop-817965
	I1213 11:20:36.813919 1068083 config.go:182] Loaded profile config "scheduled-stop-817965": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817965
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-817965: exit status 7 (73.187648ms)
-- stdout --
	scheduled-stop-817965
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817965 -n scheduled-stop-817965
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817965 -n scheduled-stop-817965: exit status 7 (74.723633ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-817965" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-817965
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-817965: (5.082961299s)
--- PASS: TestScheduledStopUnix (111.54s)
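
The scheduled-stop flow exercised above reduces to four commands: arm a stop, re-arm it (the old scheduler process is killed, as the "killing process ... as it is an old scheduled stop" line shows), cancel it, and poll the remaining time. With a hypothetical profile name "demo":

    minikube stop -p demo --schedule 5m          # arm a stop five minutes out
    minikube stop -p demo --schedule 15s         # re-arm; the previous schedule is replaced
    minikube stop -p demo --cancel-scheduled     # cancel whatever is pending
    minikube status --format={{.TimeToStop}} -p demo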

TestInsufficientStorage (12.87s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-960426 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-960426 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.299633248s)
-- stdout --
	{"specversion":"1.0","id":"8d1dac41-b497-43c6-9de8-3902c052a89c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-960426] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9696a3b1-87e2-446f-a11a-5ca7700b1a7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"ceb0c512-0e64-4d15-925a-d9410dbaef3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8bb3af5b-c40e-4a0e-8c0f-5b3a4a20123b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig"}}
	{"specversion":"1.0","id":"5d912df6-957b-44cb-9cb2-83e444aa4aef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube"}}
	{"specversion":"1.0","id":"15adfeb3-53f9-495b-964d-131917afca71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"04ff0ed2-5186-4d33-916d-c9fcd59450a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"027da72d-0fa5-4275-b4c1-ce44263ca6ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"17b7daa8-82d2-4245-9565-619ef75a475e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b88f295e-b1e0-490b-8a37-a60e492ed990","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a7b237b-505d-4e3f-b267-efa5094d2836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b0f33bf2-6402-4c1c-8eb9-94b59b61b2be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-960426\" primary control-plane node in \"insufficient-storage-960426\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c383e7f-a951-4c9e-9a7b-be0c5ea30be4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c4b106f-2dd3-492e-85dd-fb0ec4770e74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e36d1ef-fe3c-4fa4-bf2c-c1dff80db7bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-960426 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-960426 --output=json --layout=cluster: exit status 7 (304.168322ms)
-- stdout --
	{"Name":"insufficient-storage-960426","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-960426","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1213 11:21:37.643589 1069805 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-960426" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-960426 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-960426 --output=json --layout=cluster: exit status 7 (290.096218ms)
-- stdout --
	{"Name":"insufficient-storage-960426","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-960426","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1213 11:21:37.932309 1069870 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-960426" does not appear in /home/jenkins/minikube-integration/22128-904040/kubeconfig
	E1213 11:21:37.942471 1069870 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/insufficient-storage-960426/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-960426" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-960426
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-960426: (1.977726423s)
--- PASS: TestInsufficientStorage (12.87s)
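
With --output=json, "minikube start" streams one CloudEvents-style JSON object per line and exits 26 (RSRC_DOCKER_STORAGE) once /var fills up; "status --output=json --layout=cluster" then reports StatusCode 507 (InsufficientStorage) and exits 7. The MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values visible above are how the test simulates the full disk. The same probe by hand, profile name hypothetical:

    minikube start -p demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    echo $?   # 26 maps to RSRC_DOCKER_STORAGE
    minikube status -p demo --output=json --layout=cluster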

TestRunningBinaryUpgrade (304.02s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.805301886 start -p running-upgrade-161631 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.805301886 start -p running-upgrade-161631 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.745311318s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-161631 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 11:30:25.726424  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:31:48.803165  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:33:25.220843  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:33:37.840888  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-161631 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.460069845s)
helpers_test.go:176: Cleaning up "running-upgrade-161631" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-161631
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-161631: (2.037735418s)
--- PASS: TestRunningBinaryUpgrade (304.02s)
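
A running-binary upgrade is two starts against the same profile: the first with the old release binary, the second with the binary under test while the cluster is still running. Sketched below; the /tmp path suffix is randomized per run, so the placeholder is illustrative:

    # old release creates the cluster
    /tmp/minikube-v1.35.0.XXXXXXXXX start -p demo --memory=3072 --vm-driver=docker --container-runtime=crio
    # the binary under test takes over the live cluster in place
    out/minikube-linux-arm64 start -p demo --memory=3072 --driver=docker --container-runtime=crio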

TestMissingContainerUpgrade (127.3s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1094484065 start -p missing-upgrade-828630 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1094484065 start -p missing-upgrade-828630 --memory=3072 --driver=docker  --container-runtime=crio: (1m9.420978351s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-828630
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-828630
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-828630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-828630 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.000105645s)
helpers_test.go:176: Cleaning up "missing-upgrade-828630" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-828630
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-828630: (2.942481208s)
--- PASS: TestMissingContainerUpgrade (127.30s)
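
The missing-container case deletes the cluster's docker container out from under minikube and checks that a plain start recreates it. Reduced to its essentials, with a hypothetical profile name "demo":

    docker stop demo && docker rm demo   # simulate the machine container disappearing
    minikube start -p demo --memory=3072 --driver=docker --container-runtime=crio   # recreates it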

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-885378 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-885378 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (101.186632ms)
-- stdout --
	* [NoKubernetes-885378] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-904040/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-904040/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
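
The MK_USAGE failure above is the guard for mutually exclusive flags: --no-kubernetes cannot be combined with --kubernetes-version, and the suggested remedy is to clear any globally configured version. In short, with a hypothetical profile name "demo":

    # exits 14 (MK_USAGE): the two flags conflict
    minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0
    # clear a version pinned in the global config instead
    minikube config unset kubernetes-version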

TestNoKubernetes/serial/StartWithK8s (44.77s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-885378 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-885378 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.276586854s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-885378 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.77s)

TestNoKubernetes/serial/StartWithStopK8s (7.37s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.771289036s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-885378 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-885378 status -o json: exit status 2 (397.358505ms)
-- stdout --
	{"Name":"NoKubernetes-885378","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-885378
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-885378: (2.205540056s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.37s)

TestNoKubernetes/serial/Start (8.52s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-885378 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.519343531s)
--- PASS: TestNoKubernetes/serial/Start (8.52s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22128-904040/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-885378 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-885378 "sudo systemctl is-active --quiet service kubelet": exit status 1 (439.085759ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
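
Note: the exit status 1 above is minikube ssh propagating the remote command's failure, and the "Process exited with status 3" in stderr appears to be systemd's is-active reporting an inactive unit, which is exactly what the test wants. A manual equivalent with the profile from this run (dropping --quiet also prints the state):

	out/minikube-linux-arm64 ssh -p NoKubernetes-885378 "sudo systemctl is-active kubelet"; echo "exit=$?"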

TestNoKubernetes/serial/ProfileList (6.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-arm64 profile list: (5.367037181s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.05s)
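
Note: both invocations above exercise the same listing path; the --output=json form is the one meant for scripting. The JSON shape is not shown in this log, so the sketch below only pretty-prints it (assumes jq is installed):

	out/minikube-linux-arm64 profile list --output=json | jq .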

TestNoKubernetes/serial/Stop (1.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-885378
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-885378: (1.489808806s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

TestNoKubernetes/serial/StartNoArgs (7.17s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-885378 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-885378 --driver=docker  --container-runtime=crio: (7.170351175s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.17s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-885378 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-885378 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.138217ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (0.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestStoppedBinaryUpgrade/Upgrade (303.78s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.896043148 start -p stopped-upgrade-443186 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.896043148 start -p stopped-upgrade-443186 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.282497874s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.896043148 -p stopped-upgrade-443186 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.896043148 -p stopped-upgrade-443186 stop: (1.231157603s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-443186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 11:25:00.923107  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:25:25.725873  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:26:28.293889  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:28:25.222935  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-200955/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:28:37.840328  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/addons-054604/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-443186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.265376109s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.78s)
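
Note: the three commands above are the whole upgrade contract being tested: provision with the previous release, stop it, then start the same profile with the binary under test. Condensed from the log (the /tmp path is the downloaded v1.35.0 release binary; the cert_rotation errors reference client certs of profiles removed earlier in the run and appear to be noise):

	/tmp/minikube-v1.35.0.896043148 start -p stopped-upgrade-443186 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.35.0.896043148 -p stopped-upgrade-443186 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-443186 --memory=3072 --driver=docker --container-runtime=crio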

TestStoppedBinaryUpgrade/MinikubeLogs (1.75s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-443186
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-443186: (1.751970386s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.75s)

TestPause/serial/Start (82.56s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-318241 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-318241 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.555256627s)
--- PASS: TestPause/serial/Start (82.56s)

TestPause/serial/SecondStartNoReconfiguration (28.32s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-318241 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 11:35:25.726298  907484 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-904040/.minikube/profiles/functional-769798/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-318241 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.299058711s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.32s)


Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.48
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.48s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-457146 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-457146" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-457146
--- SKIP: TestDownloadOnlyKic (0.48s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)